venue | paper_content | prompt | format | review
---|---|---|---|---|
ICLR | Title
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Abstract
Deep networks have achieved impressive results on a range of well curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as Imagenet-C and Stylized Imagenet. It adds little computational overhead and furthermore does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation attaining a state-of-the-art mean corruption error (mCE) of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of 16 mCE compared to the baseline.
1 INTRODUCTION
Deep neural networks have achieved state-of-the-art accuracy across a range of benchmark tasks such as image segmentation (Ren et al., 2015) and speech recognition (Hannun et al., 2014). These successes have led to the widespread adoption of neural networks in many real-life applications. However, while these networks perform well on curated benchmark datasets, their performance can suffer greatly in the presence of small data corruptions (Szegedy et al., 2014; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017; Athalye et al., 2018; Hendrycks & Dietterich, 2018). This poses significant challenges to the application of deep networks.
Hendrycks & Dietterich (2018) show that the accuracy of a standard model on Imagenet can drop from 76% to 20% when evaluated on images corrupted with small visual transformations. This shows that modern networks are not robust to certain small shifts in the data distribution, which is a concern because such shifts are common in many real-life applications. Second, Szegedy et al. (2014) show the existence of adversarial perturbations which are imperceptible to humans but have a disproportionate effect on the predictions of a network. This raises significant concerns about the safety of using deep networks in critical applications such as self-driving cars (Sitawarin et al., 2018).
These problems have led to numerous proposals to improve the robustness of deep networks. Some of these methods, such as those proposed by Hendrycks et al. (2019), require a priori knowledge of the visual transformations in the test domain. Others, such as Geirhos et al. (2018), use a deep network to generate transformations, which comes with a significant computational cost.
This paper proposes a new data augmentation technique to improve the robustness of deep networks by regularizing their frequency bias. This new regularization technique is based on Mixup and has several advantages compared to related robustness regularizers: (1) it does not require a priori knowledge of a large set of transformations, (2) it is inexpensive, and (3) it has few hyper-parameters. The key idea is to bias the network to rely more on lower spatial frequencies to make predictions.
We demonstrate on Imagenet-C that this method works well with recent advances and reaches a state-of-the-art mCE of 44.8 with 85.0 clean accuracy using Efficientnet-B8 and RandAugment (Cubuk et al., 2019). This is an improvement of 16 mCE compared to the baseline Efficientnet-B8 and matches ViT-L/16 (Dosovitskiy et al., 2020), which is trained on 300× more data. We find that our implementation of the method with the DCT transform adds negligible overhead in our experiments. We
find that Robustmix improves accuracy on Stylized-Imagenet by up to 15 points and we show that it can increase adversarial robustness.
2 RELATED WORK
The proposed approach can be seen as a generalization of Mixup (Zhang et al., 2018). Mixup is a data augmentation method that regularizes models to behave more linearly between examples. It does so by training the model on linear interpolations of two input examples and their respective labels. These new examples are generated as follows
x̃ = mix(x1, x2, λ), where x1, x2 are input images,
ỹ = mix(y1, y2, λ), where y1, y2 are labels,
with mix being the linear interpolation function
mix(x1, x2, λ) = λx1 + (1− λ)x2 (1)
where λ ∼ Beta(α, α) and α is the Mixup coefficient hyper-parameter. Zhang et al. (2018) show that Mixup improves the accuracy of networks and can also improve their robustness.
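For concreteness, a minimal NumPy sketch of Mixup on a minibatch is shown below; pairing each example with the batch in reverse order and the one-hot label format are our own illustrative choices, not details taken from Zhang et al. (2018).

```python
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Mixup on a minibatch: pair each example with another (here the batch in
    reverse order) and linearly interpolate both inputs and one-hot labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)                # lambda ~ Beta(alpha, alpha)
    x_tilde = lam * x + (1.0 - lam) * x[::-1]   # mix(x1, x2, lambda)
    y_tilde = lam * y + (1.0 - lam) * y[::-1]   # mix(y1, y2, lambda)
    return x_tilde, y_tilde

# toy usage: 8 random "images" with one-hot labels over 1000 classes
x = np.random.rand(8, 224, 224, 3)
y = np.eye(1000)[np.random.randint(0, 1000, size=8)]
x_mix, y_mix = mixup(x, y)
```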
Augmix (Hendrycks et al., 2019) is a data augmentation technique that improves robustness by training on a mix of known image transformations. This method adds little computational overhead but requires knowledge of a diverse set of domain-specific transformations. Hendrycks et al. (2019) mix a set of 9 different augmentations to reach 68.4 mCE on Imagenet. In contrast, the proposed method does not rely on specific image augmentations and instead relies on the more general principle that natural images are a kind of signal where most of the energy is concentrated in the lower frequencies.
Zhang (2019) uses low-pass filters directly inside the model to improve the frequency response of the network. Our method also makes use of low-pass filtering but does not completely remove high-frequency features. Additionally, we use frequency filtering only during training, so no computational overhead is incurred during evaluation.
3 METHOD
In this section, we introduce a novel extension of Mixup called Robustmix that increases robustness by regularizing the network to focus more on the low frequency features in the signal.
Motivation Wang et al. (2020) suggest that convolutional networks trade robustness for accuracy in their use of high frequency image features. Such features can be perturbed in ways that change the prediction of the model even though humans cannot perceive the change. This can lead models to
make puzzling mistakes such as with adversarial examples. Our aim is to increase robustness while retaining accuracy by regularizing how high frequency information is used by the model.
Robustmix We propose to regularize the sensitivity of the model to each frequency band by extending Mixup’s linear interpolations with a new type of band interpolation. The key insight is that we can condition the sensitivity to each band using images that mix the frequency bands of two different images. Suppose that we mix the lower frequency band of an image of a boathouse with the high frequency band of an image of a dog. We can encourage sensitivity to the lower band by training the model to predict dog for this mixed image. However, this approach is too simplistic because it completely disregards the impact of the image in the high band.
[Figure: fraction of image energy as a function of the low-pass cut-off, illustrating how much of the energy of natural images falls in the lower frequency band.]
Specifically, the mixing formula for Robustmix is given by
x̃ = Low(mix(x1, x2, λL), c) + High(mix(x1, x2, λH), c)
ỹ = λc mix(y1, y2, λL) + (1 − λc) mix(y1, y2, λH)
where λL, λH ∼ Beta(α, α), α is the Mixup coefficient hyper-parameter, and Low(·, c), High(·, c) are a low-pass and a high-pass filter respectively with a uniformly sampled cutoff frequency c ∈ [0, 1]. Here, λc is the coefficient that determines how much weight is given to the lower frequency band; it is given by the relative amount of energy in the lower frequency band for natural images:
λc = E[‖Low(xi, c)‖²] / E[‖xi‖²]. (2)
This coefficient can be efficiently computed on a mini-batch of examples.
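As an illustration, the minibatch estimate of Equation 2 amounts to a ratio of squared norms; the sketch below assumes the low-passed batch has already been computed (for instance with the DCT-based filter described next).

```python
import numpy as np

def lambda_c(x, x_low):
    """Minibatch estimate of Equation 2: the fraction of signal energy that
    survives the low-pass filter."""
    return float(np.sum(x_low ** 2) / np.sum(x ** 2))

# toy usage with a stand-in "low-passed" batch (here just a scaled copy)
x = np.random.rand(8, 224, 224, 3)
print(lambda_c(x, 0.9 * x))   # -> 0.81
```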
Implementation Computational overhead is an important consideration for data augmentation techniques since training deep networks is computationally intensive and practitioners have limited computational budget. We note that many popular techniques such as Mixup (Zhang et al., 2018) add little overhead.
The frequency separation is implemented using a Discrete Cosine Transform (DCT) to avoid the complex multiplication required by a Discrete Fourier Transform. We multiply the images with the 224x224 DCT matrix directly because the spatial dimensions are relatively small and (non-complex) matrix multiplication is well optimized on modern accelerators. A batch of images is transformed into frequency space, and the low- and high-pass filtered images must be transformed back to image space. Additionally, we must apply the DCT over the x and y dimensions separately. Thus, 6 DCT matrix multiplications are required, which results in 0.2 GFLOPs per image. In contrast, just the forward pass of ResNet50 requires 3.87 GFLOPs (Hasanpour et al., 2016).
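A minimal NumPy sketch of this separable DCT is given below; the orthonormal DCT-II construction and the helper names (dct_matrix, dct2, idct2) are our own illustrative choices rather than details specified in the paper.

```python
import numpy as np

def dct_matrix(k):
    """Orthonormal DCT-II matrix of size k x k."""
    n = np.arange(k)
    d = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    d[0] *= 1.0 / np.sqrt(2)
    return d * np.sqrt(2.0 / k)

def dct2(x, d):
    """Separable 2-D DCT of a batch (N, H, W, C): one matmul per spatial axis."""
    f = np.einsum('ij,njwc->niwc', d, x)       # transform rows
    return np.einsum('kw,nhwc->nhkc', d, f)    # transform columns

def idct2(f, d):
    """Inverse 2-D DCT; the DCT matrix is orthonormal, so its inverse is its transpose."""
    x = np.einsum('ji,njwc->niwc', d, f)
    return np.einsum('kw,nhkc->nhwc', d, x)

# example: transform a batch of 224x224 images to frequency space and back
d = dct_matrix(224)
x = np.random.rand(2, 224, 224, 3)
assert np.allclose(idct2(dct2(x, d), d), x)
```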
In our implementation of Robustmix, we reorder commutative operations (low pass and mixing) in order to compute the DCT only a single time per minibatch. The pseudocode is provided in Algorithm 1, where reverse is a function that reverses the rows of its input matrix.
Algorithm 1 Robustmix
Input: minibatch of inputs X ∈ R^(N×H×W×D) and labels Y ∈ R^(N×C), α ∈ R
Output: augmented minibatch of inputs X̃ ∈ R^(N×H×W×D) and labels Ỹ ∈ R^(N×C)
  λL, λH ∼ Beta(α, α) and c ∼ U(0, 1)
  L ← Low(X, c)
  H ← X − L
  λc ← ‖L‖² / ‖X‖²
  X̃ ← mix(L, reverse(L), λL) + mix(H, reverse(H), λH)
  Ỹ ← mix(Y, reverse(Y), λc · λL + (1 − λc) · λH)
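For illustration, a self-contained NumPy sketch of Algorithm 1 might look as follows. It reuses the orthonormal DCT construction from the sketch above, and the assumption that the low-pass filter keeps a square block of the lowest DCT coefficients is ours; the paper does not specify the exact cutoff shape.

```python
import numpy as np

def dct_matrix(k):
    """Orthonormal DCT-II matrix of size k x k (same construction as above)."""
    n = np.arange(k)
    d = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    d[0] *= 1.0 / np.sqrt(2)
    return d * np.sqrt(2.0 / k)

def low_pass(x, cutoff):
    """Zero DCT coefficients above the relative cutoff (assumes square images)."""
    _, h, w, _ = x.shape
    d = dct_matrix(h)
    f = np.einsum('ij,njwc->niwc', d, x)        # DCT over rows
    f = np.einsum('kw,nhwc->nhkc', d, f)        # DCT over columns
    keep = int(np.ceil(cutoff * h))
    f[:, keep:, :, :] = 0.0                     # drop high row frequencies
    f[:, :, keep:, :] = 0.0                     # drop high column frequencies
    g = np.einsum('ji,njwc->niwc', d, f)        # inverse DCT over rows
    return np.einsum('kw,nhkc->nhwc', d, g)     # inverse DCT over columns

def mix(a, b, lam):
    return lam * a + (1.0 - lam) * b

def robustmix(x, y, alpha=0.3, rng=None):
    """One Robustmix step on a minibatch, following Algorithm 1."""
    if rng is None:
        rng = np.random.default_rng()
    lam_l, lam_h = rng.beta(alpha, alpha), rng.beta(alpha, alpha)
    c = rng.uniform(0.0, 1.0)
    low = low_pass(x, c)                           # L <- Low(X, c)
    high = x - low                                 # H <- X - L
    lam_c = np.sum(low ** 2) / np.sum(x ** 2)      # relative low-band energy
    x_tilde = mix(low, low[::-1], lam_l) + mix(high, high[::-1], lam_h)
    y_tilde = mix(y, y[::-1], lam_c * lam_l + (1.0 - lam_c) * lam_h)
    return x_tilde, y_tilde

# toy usage on small random images and one-hot labels
x = np.random.rand(4, 32, 32, 3)
y = np.eye(10)[np.random.randint(0, 10, size=4)]
x_aug, y_aug = robustmix(x, y)
```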
4 RESULTS
4.1 DATASETS AND METRICS
ImageNet. ImageNet (Deng et al., 2009) is a classification dataset that contains 1.28 million training images and 50,000 validation images with 1000 classes. We evaluate standard classification accuracy, which we refer to as clean accuracy. We use the standard ResNet preprocessing, resulting in images of size 224x224 (He et al., 2015). The standard models, trained without any additional data augmentation, serve as the baseline.
ImageNet-C. This dataset is made of 15 types of corruption drawn from four main categories: noise, blur, weather and digital (Hendrycks & Dietterich, 2018). These corruptions are applied to the validation images of ImageNet at 5 different intensities or levels of severity. Following (Hendrycks & Dietterich, 2018), we evaluate the robustness of our method by reporting its mean corruption error (mCE) normalized with respect to AlexNet errors:
mCE = (Σ_c CE_c) / (total number of corruptions),   with   CE_c = (Σ_s E_{c,s}) / (Σ_s E^{AlexNet}_{c,s})
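For concreteness, the normalization above can be computed from per-corruption, per-severity error rates as in the following sketch; the numbers in the toy example are placeholders, not results from the paper.

```python
import numpy as np

def mce(model_errors, alexnet_errors):
    """model_errors / alexnet_errors: dict mapping corruption name -> list of
    error rates, one per severity level. Returns the mean corruption error
    (mCE) expressed in percent of the AlexNet error."""
    ces = [np.sum(errs) / np.sum(alexnet_errors[name])   # CE_c
           for name, errs in model_errors.items()]
    return 100.0 * float(np.mean(ces))                   # average over corruptions

# toy placeholder numbers (not results from the paper)
model_err = {"gaussian_noise": [0.30, 0.40, 0.55, 0.65, 0.75]}
alexnet_err = {"gaussian_noise": [0.55, 0.65, 0.75, 0.85, 0.95]}
print(mce(model_err, alexnet_err))
```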
Stylized-ImageNet. Stylized-ImageNet (SIN) is constructed from ImageNet by replacing the texture in the original image using style transfer, such that the texture gives a misleading cue about the image label (Geirhos et al., 2018). The 1000 classes from ImageNet are reduced to 16 shape categories, for instance all labels for dog species are grouped under one dog label, same for chair, car, etc. There are 1280 generated cue conflict images (80 per category). With SIN, we evaluate the classification accuracy (SIN accuracy) and measure the model’s shape bias. Following Geirhos et al. (2018), the model’s bias towards shape versus texture is measured as
shape bias = correct shapes / (correct shapes + correct textures).
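A small sketch of this statistic on the cue-conflict predictions is shown below; the label format is an illustrative assumption.

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Shape bias on cue-conflict images: fraction of shape decisions among
    images classified as either the shape or the texture category."""
    correct_shapes = sum(p == s for p, s in zip(predictions, shape_labels))
    correct_textures = sum(p == t for p, t in zip(predictions, texture_labels))
    return correct_shapes / (correct_shapes + correct_textures)

# toy usage with made-up predictions (not results from the paper)
print(shape_bias(["dog", "cat", "car"], ["dog", "dog", "car"], ["cat", "cat", "chair"]))
```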
4.2 EXPERIMENTAL SETUP
We evaluate on residual networks (ResNet-50 and ResNet-152) and EfficientNets (EfficientNet-B0, EfficientNet-B1, EfficientNet-B5 and EfficientNet-B8). Experiments were run on 8x8 TPUv3 instances for the bigger EfficientNets (EfficientNet-B5 and EfficientNet-B8); the other experiments were run on 4x4 TPUv3 slices. For the ResNet models, we use the standard training setup outlined in Goyal et al. (2017), except that we use a cosine learning rate schedule (Loshchilov & Hutter, 2016) with a single cycle for ResNets trained for 600 epochs.
4.3 ROBUSTNESS RESULTS
Imagenet-C First, we evaluate the effectiveness of the proposed method in improving robustness to the visual corruptions considered in Imagenet-C. In Table 1, we can see that Robustmix consistently improves robustness to the considered transformations, with a 15 point decrease in mCE over the baseline for ResNet-50. Robustmix with ResNet-50 achieves 61.2 mCE without degrading accuracy on the clean dataset compared to the baseline. In fact, we find a small improvement over the baseline
of 0.8% on the clean error. While Mixup yields a larger gain of 1.9% on clean accuracy, we find that Robustmix improves mCE by up to 6 points more than Mixup. These results also compare favorably to Augmix, which needs to be combined with training on Stylized ImageNet (SIN) to reduce the mCE by 12 points, an improvement that comes at a significant cost in accuracy due to the use of the Stylized Imagenet dataset. We observe a similar trade-off between accuracy and robustness in Figure 3: Mixup consistently produces lower clean error for smaller models, but the accuracy gap with Robustmix disappears as the model gets bigger.
While it is not directly comparable to ViT-L/16 due to its use of 300× more data, we see that Efficientnet-B8 with Robustmix and RandAugment has better robustness at 44.8 mCE. It is also competitive with DeepAugment (Hendrycks et al., 2020) which requires training additional specialized image-to-image models on tasks such as super-resolution to produce augmented images. By comparison, our approach does not rely on extra data or extra trained models.
In our cross-validation of α, we found that values below 0.2 perform poorly on both accuracy and mCE. Values in the range 0.2 ≤ α ≤ 0.5 give the best accuracies and mCEs as well as the best trade-off between the two; larger values of α yield good accuracy but do worse on mCE. In our experiments, we typically achieve good results with a frequency cutoff c sampled uniformly from [0, 1] as described in Algorithm 1. However, for ResNet-50 trained with too limited a budget (200 instead of 600 epochs) and for its smaller versions (ResNet-18 and ResNet-34), it can be beneficial to enforce a minimum cutoff c ≥ τ by sampling from the interval [τ, 1]. The minimum cutoff determines the range over which band mixing occurs; setting τ = 1 removes band interpolation entirely and recovers standard Mixup. For ResNet-50 with too few training epochs, we found τ = 0.1 to be a good value, but much better results can be achieved with 600 epochs without any modifications to Algorithm 1.
Stylized-ImageNet. We confirm that our method indeed increases both accuracy on Stylized ImageNet and the shape bias, as shown in Table 2. For ResNet-50, Robustmix almost doubles the shape bias over the baseline (from 19 to 37) and improves it by 63% over Mixup, while the relative improvements in SIN accuracy are 72% and 33% over the baseline and Mixup, respectively. We make the same observation for EfficientNet-B5, which improves shape bias by nearly 50% and SIN accuracy by nearly 60% over the baseline.
4.4 ADVERSARIAL PERTURBATIONS
In this section, we consider robustness to adversarial perturbations (Goodfellow et al., 2014). Adversarial perturbations are imperceptible to the human eye yet cause models to misclassify examples. Wang et al. (2020) and Yin et al. (2019) have shown that adversarial perturbations of unregularized models disproportionately affect higher frequencies. As we have seen in Figure 5, Robustmix encourages the model to rely more on the lower frequencies to make predictions. Our hypothesis is that this will have a beneficial effect on adversarial robustness. For our experiment, we use one of the first methods proposed in the deep learning community to construct adversarial examples, the "Fast Gradient Sign Method" (FGSM) (Goodfellow et al., 2014). The adversarial example is constructed by adding a perturbation proportional to the sign of the gradient of the loss with respect to the input: x′ = x + ε · sign(∇x J(θ, x, y)). As seen in Figure 4, we find Robustmix is more robust than the baselines to this type of adversarial attack.
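A generic sketch of FGSM is shown below; the loss_grad argument stands in for the gradient an autodiff framework would provide, and the step size ε is a free parameter we set arbitrarily for illustration.

```python
import numpy as np

def fgsm(x, y, loss_grad, eps=8.0 / 255.0):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x J(theta, x, y)).
    loss_grad(x, y) is a placeholder returning dJ/dx with the same shape as x;
    in practice it would come from an autodiff framework."""
    x_adv = x + eps * np.sign(loss_grad(x, y))
    return np.clip(x_adv, 0.0, 1.0)   # keep pixels in a valid range

# toy stand-in gradient so the sketch runs end to end
toy_grad = lambda x, y: np.random.randn(*x.shape)
x = np.random.rand(1, 224, 224, 3)
x_adv = fgsm(x, np.array([3]), toy_grad)
```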
4.5 ANALYSIS AND DISCUSSION
Low frequency bias In order to quantify the degree to which models rely on lower frequencies, we measure how much accuracy drops as we remove higher frequency information with a low-pass filter. Figure 5 shows that Robustmix is comparatively more robust to the removal of high frequencies. This indicates that models trained with Robustmix rely significantly less on these high-frequency features to make accurate predictions.
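This analysis amounts to sweeping the low-pass cutoff and re-evaluating accuracy; the sketch below illustrates the procedure, with predict and low_pass as placeholders for the trained model and the DCT-based filter of Section 3.

```python
import numpy as np

def lowpass_accuracy_curve(x, y, predict, low_pass,
                           cutoffs=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Top-1 accuracy after removing frequencies above each cutoff."""
    return {c: float(np.mean(predict(low_pass(x, c)) == y)) for c in cutoffs}

# toy placeholders so the sketch runs; real usage would plug in a trained
# model and the DCT-based low-pass filter from Section 3
toy_predict = lambda imgs: np.zeros(len(imgs), dtype=int)
toy_lowpass = lambda imgs, c: imgs
x, y = np.random.rand(4, 32, 32, 3), np.zeros(4, dtype=int)
print(lowpass_accuracy_curve(x, y, toy_predict, toy_lowpass))
```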
5 CONCLUSION
In this paper, we have introduced a new method to improve robustness called Robustmix which regularizes models to focus more on lower spatial frequencies to make predictions. We have shown that this method yields improved robustness on a range of benchmarks including Imagenet-C and Stylized Imagenet. In particular, this approach attains an mCE of 44.8 on Imagenet-C with Efficientnet-B8, which is competitive with models trained on 300× more data. Our method offers a promising new research direction for robustness with a number of open challenges. We have used a standard DCT based low-pass filter on images and L2 energy metric to determine the contribution of each label. This leaves many alternatives to be explored, such as: different data modalities like audio; more advanced frequency separation techniques like Wavelets; and alternative contribution metrics for mixing labels. | 1. What is the focus and contribution of the paper on improving the robustness of image classification/segmentation?
2. What are the strengths and weaknesses of the proposed frequency-guided data augmentation approach?
3. Do you have any concerns regarding the generalization of the approach, particularly in terms of its application to different types of images?
4. How does the reviewer assess the effectiveness and relevance of the comparisons made in the paper, specifically with respect to Stylized-ImageNet and ImageNet-C?
5. What are some potential limitations or areas for improvement in the proposed method, such as the use of separate 1-D DCTs instead of a single 2-D DCT? | Summary Of The Paper
Review | Summary Of The Paper
This paper discusses a new proposal for improving the robustness of image classification/segmentation via a frequency-guided data augmentation approach. The proposed technique is based on a DCT transformation to determine frequency bands with high energy and order them with respect to their sensitivity. The data augmentation is done by a linear combination of images weighted by the energy contribution of the different frequency bands.
Review
The approach is interesting but there are still some open questions:
Generalization of the approach: The generalization of the approach is not thoroughly discussed. The frequency sensitivity depends on the image type; applying your approach to, e.g., technical drawings with sharp edges will behave differently than applying it to images with smooth gradients. Furthermore, the discussion of the approach based on Stylized-ImageNet and ImageNet-C is not really convincing for assessing the quality of the approach in general. Stylized-ImageNet aims at achieving robustness with respect to different (higher-order) textures, so the basic construction metrics of the two approaches are related and similar results can be expected.
A comparison with images “augmented” by real sensor/environment noise would be interesting for the reader. Likewise, ImageNet-C only applies very artificial noise to the image dataset, which is not really comparable to real noise, especially the very artificial snow, rain and fog effects without consideration of the depth of the image scene.
But of course, this is a serious point of criticism of many robustness augmentation approaches being proposed the last years since most of the proposed approaches are not really free of systematic domain shifts.
DCT calculation: it is not clear why two 1-D DCTs are calculated separately instead of a single 2-D DCT. The authors should clarify whether the rationale behind this is just complexity and what the impact is w.r.t. accuracy.
Minor comments:
Include the term mean corruption error in the paper abstract to explain the acronym mCE. |
ICLR | [paper content identical to the first row] | 1. What is the focus of the paper regarding data augmentation?
2. What are the strengths and weaknesses of the proposed approach compared to other methods?
3. Do you have any concerns regarding the evaluation and comparisons made in the paper?
4. How does the reviewer assess the novelty and effectiveness of the proposed method?
5. Are there any questions or suggestions for future work related to the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a novel data augmentation method for training classifiers. The idea is to mix two images in two different ways using standard Mixup, and then compose a final image by taking different frequency bands from the two mixed images. The training label is based on signal energy in different frequency bands, generally strongly favoring the label of the low-frequency content. Evaluations suggest the new scheme is more robust against image corruptions than previous methods, but not all relevant (combinations of) methods have been evaluated.
Review
The paper is well written and the method is easy to understand. There are lots of comparison runs, and these illuminate the strengths and weaknesses of the method to some degree. However, I find the evaluation still somewhat lacking.
In particular, the method appears to not improve over standard Mixup in terms of clean accuracy, and only wins convincingly when measuring the corruption error mCE. However, these corruptions include effects such as blur and weathering that corrupt the high frequencies, so this victory is not very surprising given how Robustmix explicitly creates a low-frequency bias.
The obvious challenger to Robustmix (and Mixup, for that matter) would be augmenting training data with corruptions that resemble the ones used in the measurements, but this approach has not been measured in isolation, except for AugMix with ResNet-50. But AugMix explicitly removes the sharpness augmentation which can remove high frequencies, so the benefit of RobustMix there (in terms of mCE) could be just because of its ability to corrupt high frequencies in the training data. RandAugment provides plenty of variation to training data, but it is not tested on its own, only when combined with Robustmix. I would have hoped to see tests with RandAugment but without Robustmix to gauge their effectiveness separately. Moreover, the tests with EfficientNet-B8 include neither Mixup or RandAugment alone, or their combination. As such, these measurements are not orthogonal enough to draw firm conclusions about the relative benefits of Robustmix.
The paper also does not measure a simpler alternative where low and high frequencies are taken from different images (without mixing) and the label is decided based on spectral energy as in Robustmix. Variants like this are briefly discussed on Page 3 but dismissed without measurements. The point about encouraging linearity within a frequency band makes sense to me, but it would be nice to see that backed up with data.
The extra tests in Sections 4.4 and 4.5, related to adversarial perturbations and low-frequency bias, are very interesting. However, their usefulness is limited because they do not specify the test setup, i.e., which network was used or how it was trained. For completeness, these tests should also discuss how the results compare against state of the art. It is to be expected that methods geared explicitly for these purposes (e.g., adversarial training) are better in that regard, but it would be interesting to know how far the proposed method gets.
The paper is missing a citation to RandAugment. |
ICLR | Title
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Abstract
Deep networks have achieved impressive results on a range of well curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as Imagenet-C and Stylized Imagenet. It adds little computational overhead and furthermore does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation attaining a state-of-the-art mean corruption error (mCE) of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of 16 mCE compared to the baseline.
N/A
Deep networks have achieved impressive results on a range of well curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as Imagenet-C and Stylized Imagenet. It adds little computational overhead and furthermore does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation attaining a state-of-the-art mean corruption error (mCE) of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of 16 mCE compared to the baseline.
1 INTRODUCTION
Deep neural networks have achieved state-of-the-art accuracy across a range of benchmark tasks such as image segmentation (Ren et al., 2015) and speech recognition (Hannun et al., 2014). These successes have led to the widespread adoption of neural networks in many real-life applications. However, while these networks perform well on curated benchmark datasets, their performance can suffer greatly in the presence of small data corruptions (Szegedy et al., 2014; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017; Athalye et al., 2018; Hendrycks & Dietterich, 2018). This poses significant challenges to the application of deep networks.
Hendrycks & Dietterich (2018) show that the accuracy of a standard model on Imagenet can drop from 76% to 20% when evaluated on images corrupted with small visual transformations. This shows modern networks are not robust to certain small shifts in the data distribution. That is a concern because such shifts are common in many real-life applications. Secondly, Szegedy et al. (2014) show the existence of adversarial perturbations which are imperceptible to humans but have a disproportionate effect on the predictions of a network. This raises significant concerns about the safety of using deep networks in critical applications such as self driving cars (Sitawarin et al., 2018).
These problems have led to numerous proposals to improve the robustness of deep networks. Some of these methods such as those proposed by Hendrycks et al. (2019) require a priori knowledge of the visual transformations in the test domain. Others, such as Geirhos et al. (2018) use a deep network to generate transformations which comes with significant computation cost.
This paper proposes a new data augmentation technique to improve the robustness of deep networks by regularizing frequency bias. This new regularization technique is based on Mixup and has many advantages compared to related robustness regularizers: (1) it does not require knowledge of a large set of priori transformations, (2) it is inexpensive and (3) it doesn’t have many hyper-parameters. The key idea is to bias the network to rely more on lower spatial frequencies to make predictions.
We demonstrate on Imagenet-C that this method works well with recent advances and reaches a state-of-the-art mCE of 44.8 with 85.0 clean accuracy with Efficientnet-B8 and RandAugment(Cubuk et al., 2019). This is an improvement of 16 mCE compared to the baseline Efficientnet-B8 and matches ViT-L/16 (Dosovitskiy et al., 2020), which is trained on 300× more data. We find that our implementation of the method with DCT transform adds negligible overhead in our experiments. We
find that Robustmix improves accuracy on Stylized-Imagenet by up to 15 points and we show that it can increase adversarial robustness.
2 RELATED WORK
The proposed approach can be seen as a generalization of Mixup (Zhang et al., 2018). Mixup is a data augmentation method that regularizes models to behave more linearly between examples. It does so by training the model on linear interpolations of two input examples and their respective labels. These new examples are generated as follows
x̃ = mix(x1, x2, λ), where x1, x2 are input images ỹ = mix(y1, y2, λ), where y1, y2 are labels
with mix being the linear interpolation function
mix(x1, x2, λ) = λx1 + (1− λ)x2 (1)
where λ ∼ Beta(α, α), α is the Mixup coefficient hyper-parameter. Zhang et al. (2018) show that Mixup improves the accuracy of networks and can also improve the robustness of the network.
Augmix (Hendrycks et al., 2019) is a data augmentation technique to improve robustness by training on a mix of known image transformations. This method add little computational overhead, but requires knowledge of a diverse set of domain specific transformations. Hendrycks et al. (2019) mixes a set of 9 different augmentation to reach 68.4 mCE on Imagenet. In contrast, the proposed method does not rely on specific image augmentations and instead relies on the more general principle that natural images are a kind of signal where most of the energy is concentrated in the lower frequencies.
Zhang (2019) uses low pass filters directly inside the model to improve the frequency response of the network. Our method also makes use of low-pass filtering but does not completely remove high frequency features. Additionally, we only uses frequency filtering during training and therefore no computational overhead is incurred during evaluation.
3 METHOD
In this section, we introduce a novel extension of Mixup called Robustmix that increases robustness by regularizing the network to focus more on the low frequency features in the signal.
Motivation Wang et al. (2020) suggest that convolutional networks trade robustness for accuracy in their use of high frequency image features. Such features can be perturbed in ways that change the prediction of the model even though humans cannot perceive the change. This can lead models to
make puzzling mistakes such as with adversarial examples. Our aim is to increase robustness while retaining accuracy by regularizing how high frequency information is used by the model.
Robustmix We propose to regularize the sensitivity of the model to each frequency band by extending Mixup’s linear interpolations with a new type of band interpolation. The key insight is that we can condition the sensitivity to each band using images that mix the frequency bands of two different images. Suppose that we mix the lower frequency band of an image of a boathouse with the high frequency band of an image of a dog. We can encourage sensitivity to the lower band by training the model to predict dog for this mixed image. However, this approach is too simplistic because it completely disregards the impact of the image in the high band.
0.0 0.1 0.2 0.4 0.6 0.8 1.0 Low-pass cut-off
0.0
0.2
0.4
0.6
0.8
1.0
En er
gy
band.
Specifically, the mixing formula for Robustmix is given by
x̃ = Low(mix(x1, x2, λL), c) + High(mix(x1, x2, λH), c)
ỹ = λcmix(y1, y2, λL) + (1− λc)mix(y1, y2, λH)
where λL, λH ∼ Beta(α, α), α is the Mixup coefficient hyper-parameter, and Low(·, c),High(·, c) are a low pass and high pass filter respectively with a uniformly sampled cutoff frequency c ∈ [0, 1]. And λc is the coefficient that determines how much weight is given to the lower frequency band. It is given by the relative amount of energy in the lower frequency band for natural images
λc = E[‖Low(xi, c)‖2]
E[‖xi‖2] . (2)
This coefficient can be efficiently computed on a mini-batch of examples.
Implementation Computational overhead is an important consideration for data augmentation techniques since training deep networks is computationally intensive and practitioners have limited computational budget. We note that many popular techniques such as Mixup (Zhang et al., 2018) add little overhead.
The frequency separation is implemented using a Discrete Cosine Transform (DCT) to avoid the complex multiplication required by an Discrete Fourier Transform. We multiply the images with the 224x224 DCT matrix directly because the spatial dimensions are relatively small and (non-complex) matrix multiplication is well-optimized on modern accelerators. A batch of images is transformed into frequency space and the low and high pass filtered images must be transformed back to image space. Additionally, we must apply the DCT transform over the x and y dimension separately. Thus, 6 DCT matrix multiplications are required which results in 0.2 GFLOPs per image. In contrast, just the forward pass of ResNet50 requires 3.87 GFLOPs (Hasanpour et al., 2016).
In our implementation of Robustmix, we reorder commutative operations (low pass and mixing) in order to compute the DCT only a single time per minibatch. The pseudocode is provided in Algorithm 1, where reverse is a function that reverses the rows of its input matrix.
Algorithm 1 Robustmix Input: Minibatch of inputs X ∈ RN×H×W×D and labels Y ∈ RN×C , α ∈ R Output: Augmented minibatch of inputs X̃ ∈ RN×W×H×D and labels Ỹ ∈ RN×C λL, λH ∼ Beta(α, α) and c ∼ U(0, 1) L← Low(X, c) H ← 1− L λc ← ‖L‖ 2
‖X‖2
X̃ ← mix(L, reverse(L), λL) + mix(H, reverse(H), λH) Ỹ ← mix(Y, reverse(Y ), λc ∗ λL + (1− λc) ∗ λH)
4 RESULTS
4.1 DATASETS AND METRICS
ImageNet. ImageNet (Deng et al., 2009) is a classification dataset that contains 1.28 million training images and 50000 validation images with 1000 classes. We evaluate the common classification accuracy which will be referred to as clean accuracy. We use the standard Resnet preprocessing resulting in images of size 224x224 (He et al., 2015). The standard models, without any additional data augmentation process, will be qualified as the baseline.
ImageNet-C. This dataset is made of 15 types of corruption drawn from four main categories: noise, blur, weather and digital (Hendrycks & Dietterich, 2018). These corruptions are applied to the validation images of ImageNet at 5 different intensities or levels of severity. Following (Hendrycks & Dietterich, 2018), we evaluate the robustness of our method by reporting its mean corruption error (mCE) normalized with respect to AlexNet errors:
mCE =
∑ corruption c CEc
Total Number of Corruptions , with CEc =
∑ severity s
Ec,s∑ sE AlexNet c,s
Stylized-ImageNet. Stylized-ImageNet (SIN) is constructed from ImageNet by replacing the texture in the original image using style transfer, such that the texture gives a misleading cue about the image label (Geirhos et al., 2018). The 1000 classes from ImageNet are reduced to 16 shape categories, for instance all labels for dog species are grouped under one dog label, same for chair, car, etc. There are 1280 generated cue conflict images (80 per category). With SIN, we evaluate the classification accuracy (SIN accuracy) and measure the model’s shape bias. Following Geirhos et al. (2018), the model’s bias towards shape versus texture is measured as
shape bias = correct shapes
correct shapes + correct textures .
4.2 EXPERIMENTAL SETUP
We chose to do evaluations on residual nets (ResNet-50 and ResNet-152) and EfficientNets (EfficientNet-B0, EfficientNet-B1, EfficientNet-B5 and EfficientNet-B8). Experiments were run on 8x8 TPUv3 instances for the the bigger EfficientNets (EfficientNet-B5 and EfficientNet-B8); and the other experiments were run on 4x4 TPUv3 slices. For the Resnet models, we use the same standard training setup outlined in Goyal et al. (2017). However, we use cosine learning rate Loshchilov & Hutter (2016) with a single cycle for Resnets that are trained for 600 epochs.
4.3 ROBUSTNESS RESULTS
Imagenet-C First, we evaluate the effectiveness of the proposed method in improving robustness to the visual corruptions considered in Imagenet-C. In Table 1, we can see that Robustmix consistently improves robustness to the considered transformations, with a 15 point decrease in mCE over the baseline for ResNet-50. Robustmix with ResNet-50 achieves 61.2 mCE without degrading accuracy on the clean dataset compared to the baseline. In fact, we find a small improvement over the baseline
of 0.8% on the clean error. While Mixup yields a larger gain of 1.9% on the clean accuracy, we find that Robustmix improves mCE by up to 6 points more than Mixup. These results also compare favorably to Augmix, which needs to be combined with training on Stylized ImageNet (SIN) to reduce the mCE by 12 points. And this improvement comes at significant cost to the accuracy due to the use of the Stylized Imagenet dataset. We also observe a similar trade-off between accuracy and robustness as we can observe in Figure 3. We observe that Mixup consistently produces lower clean error for smaller models, but the accuracy gap with Robustmix disappears as the model gets bigger.
While it is not directly comparable to ViT-L/16 due to its use of 300× more data, we see that Efficientnet-B8 with Robustmix and RandAugment has better robustness at 44.8 mCE. It is also competitive with DeepAugment (Hendrycks et al., 2020) which requires training additional specialized image-to-image models on tasks such as super-resolution to produce augmented images. By comparison, our approach does not rely on extra data or extra trained models.
In our cross-validation of α, we found that small values (less than 0.2) perform poorly on both accuracy and mCE. Values in the range 0.2 ≤ α ≤ 0.5 not only give the best accuracies and mCEs but also the best trade-off between the two, as larger values of α give good accuracy but do not do as well on mCE. In our experiments, we typically achieve good results with a frequency cutoff c sampled in [0, 1] as described in Algorithm 1. However, for ResNet-50 trained with too limited a budget (200 instead of 600 epochs) and for its smaller versions (ResNet-18 and ResNet-34), it can be beneficial to impose a minimum cutoff c ≥ τ by sampling in the interval [τ, 1]. The minimum cutoff determines the range at which band mixing occurs, and we recover standard Mixup by setting τ = 1, which removes band interpolation entirely. For ResNet-50 with too few training epochs, we found that 0.1 is a good value for the minimum, but much better results can be achieved with 600 epochs without any modifications to Algorithm 1.
Stylized-ImageNet. We confirm that our method indeed increases both accuracy on Stylized ImageNet and the shape bias, as shown in Table 2. For ResNet-50, Robustmix almost doubles the shape bias over the baseline (from 19 to 37) and improves it by 63% over Mixup, while the relative improvements in SIN accuracy are 72% and 33% over the baseline and Mixup respectively. We make the same observation for EfficientNet-B5, which improves the shape bias by nearly 50% and SIN accuracy by nearly 60% over the baseline.
4.4 ADVERSARIAL PERTURBATIONS
In this section, we consider robustness to adversarial perturbations (Goodfellow et al., 2014). Adversarial perturbations make no visually distinguishable difference to the human eye but cause models to misclassify examples. Wang et al. (2020) and Yin et al. (2019) have shown that adversarial perturbations of unregularized models disproportionately affect higher frequencies. As we have seen in Figure 5, Robustmix encourages the model to rely more on the lower frequencies to make predictions. Our hypothesis is that this will have a beneficial effect on adversarial robustness. For our experiment we use one of the first methods proposed in the deep learning community to construct adversarial examples, the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). The adversarial example is constructed by adding a perturbation proportional to the sign of the gradient of the loss with respect to the input: x′ = x + ε · sign(∇_x J(θ, x, y)). As seen in Figure 4, we find Robustmix is more robust than the baselines to this type of adversarial attack.
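For reference, a minimal PyTorch-style sketch of this single-step attack is shown below; the model, loss function, and ε value are stand-ins rather than the exact configuration used in our experiments.

import torch

def fgsm(model, loss_fn, x, y, epsilon):
    # Perturb the input in the direction of the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # assumes inputs scaled to [0, 1]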
4.5 ANALYSIS AND DISCUSSION
Low frequency bias In order to quantify the degree to which models rely on lower frequencies, we measure how much accuracy drops as we remove higher frequency information with a low-pass filter. Figure 5 shows that Robustmix is comparatively more robust to the removal of high frequencies. This indicates that models trained with Robustmix rely significantly less on these high-frequency features to make accurate predictions.
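The low-pass filtering used for this analysis can be approximated with a 2-D DCT as in the sketch below (scipy-based; the cutoff convention is illustrative and may differ in detail from the exact filter used in our experiments, and color images would be filtered per channel).

import numpy as np
from scipy.fft import dctn, idctn

def low_pass(image, cutoff):
    # Zero out DCT coefficients whose normalized frequency index exceeds `cutoff`.
    coeffs = dctn(image, norm='ortho')
    h, w = image.shape
    rows = np.arange(h)[:, None] / h
    cols = np.arange(w)[None, :] / w
    mask = (rows < cutoff) & (cols < cutoff)
    return idctn(coeffs * mask, norm='ortho')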
5 CONCLUSION
In this paper, we have introduced a new method to improve robustness called Robustmix, which regularizes models to focus more on lower spatial frequencies to make predictions. We have shown that this method yields improved robustness on a range of benchmarks including Imagenet-C and Stylized Imagenet. In particular, this approach attains an mCE of 44.8 on Imagenet-C with Efficientnet-B8, which is competitive with models trained on 300× more data. Our method offers a promising new research direction for robustness with a number of open challenges. We have used a standard DCT-based low-pass filter on images and an L2 energy metric to determine the contribution of each label. This leaves many alternatives to be explored, such as different data modalities like audio, more advanced frequency separation techniques like wavelets, and alternative contribution metrics for mixing labels. | 1. What is the focus and contribution of the paper on improving image classifiers' robustness?
2. What are the strengths of the proposed approach, particularly its simplicity and soundness?
3. What are the weaknesses of the paper regarding the novelty of the idea and the necessity of certain components?
4. Do you have any questions about the results or explanations provided in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new spectral augmentation Robustmix, aiming to improve the robustness of image classifiers. In detail, the presented transformation consists of two preliminary Mixup steps and one final stage mixing low and high frequencies from the images obtained during the two initial steps. The pseudo-label is computed by weighing the labels, produced by Mixup steps, according to the relative amount of energy in low and high bands.
As the experiments show, applying Robustmix during training always leads to improved robustness to noise, corruptions, and adversarial perturbations, though in some cases trading off performance on vanilla datasets.
Review
Strengths
To my mind, the presented idea, while rather simple to implement, is well aligned with recent advances at the intersection of signal processing and deep learning. The writing is clear and easy to understand.
Weaknesses.
a. First of all, the idea of spectral mixing itself is not particularly novel. E.g., [1] proposed to use it for the domain transfer problem, and [2] presented the f-mixup procedure for black-box adversarial attacks.
b. Although the approach of selecting λ_c according to the relative spectral energy looks inspiring and theoretically motivated, it needs an ablation study. How much does it outperform, e.g., mixing with the weights just proportional to c?
c. The influence of the two preliminary Mixup steps also was not assessed. Are they really necessary? Does it not suffice just to mix two images in the spectral domain (i.e., set λ_L = 0, λ_H = 1)?
Questions.
a. Fig. 4 shows that in some cases stronger perturbation (e.g., the strength of ~5 vs ~12) leads to better test accuracy, especially in Robustmix and Mixup cases. Do the authors have any explanation for such behavior?
References
[1] Yang and Soatto. FDA: Fourier Domain Adaptation for Semantic Segmentation. 2020.
[2] Li et al. F-mixup: Attack CNNs From Fourier Perspective. 2021. |
ICLR | Title
Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
Abstract
Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as in increasing ANN classification robustness. We compare the sleep algorithm’s performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.
1 INTRODUCTION
Although artificial neural networks (ANNs) have recently begun to rival human performance on various tasks, ranging from complex games (Silver et al. (2016)) to image classification (Krizhevsky et al. (2012)), ANNs have been shown to underperform when the testing data differs from the training data in specific ways, even by a small amount (Geirhos et al. (2018)). This lack of generalization presents two issues when ANNs are utilized in the real world. First, ANNs are often trained on curated datasets of images designed to best capture the image content, whereas in real-world scenarios they may be tested on disturbed or noisy inputs not observed during training. Second, ANNs are susceptible to adversarial attacks, the deliberate creation of inputs designed to fool ANNs that may be imperceptibly different from correctly classified inputs (Szegedy et al. (2013)). These two issues limit ANNs' applicability in the real world and present potential security risks when they are deployed.
There have been two main approaches for investigating ANN robustness: adversarial machine learning and training data manipulation (Ford et al. (2019)). Adversarial machine learning aims to develop novel attack methods which perturb the input minimally while changing the ANN’s classification outcome (Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017); Goodfellow et al. (2014); Athalye et al. (2017); Nguyen et al. (2015)) as well as to design defense mechanisms which prevent these attacks from affecting ANN behavior (Papernot et al. (2016b); Goodfellow et al. (2014); Huang et al. (2015), see Yuan et al. (2019) for review). Training data manipulation research typically examines the impact of changing the input distribution during testing and observing the effect on ANN performance. Geirhos et al. (2018) showed that ANNs trained on images with one type of distortion may not perform well when tested on other types of distortions, even if images with both distortions appear identical to the human eye. Likewise, ANNs trained on unperturbed images exhibit reduced performance when images in the test set are distorted, for example, through horizontal translations, blurring, or the addition of compression artifacts (Dodge & Karam (2016); Vasiljevic et al. (2016); Zhou et al. (2017)). Although it has been proposed that adversarial and manipulation robustness can be increased through various mechanisms during the training phase, such as fine-tuning, recent research has shown that these methods are mostly ineffective or their effectiveness is inconclusive (Geirhos et al. (2018); Uesato et al. (2018); Athalye et al. (2018)).
It has been hypothesized that in the mammalian brain sleep helps to create generalized representations of inputs learned during the awake state (Stickgold & Walker (2013); Lewis & Durrant (2011)). Sleep has been identified as being critical for memory consolidation - a process of converting recent memories into long-term storage (Rasch & Born (2013)). During sleep, there is reactivation of neurons involved in previously learned activity (Stickgold (2005)), and this reactivation is likely to invoke the same spatio-temporal pattern of neuronal firing as the pattern observed during training in the awake state (Wilson & McNaughton, 1994). Sleep reactivation, or replay, serves to strengthen synapses involved in a learned task through local synaptic plasticity, such as spike-timing dependent plasticity (STDP). Plastic changes during sleep can increase a subject's ability to form connections between memories and to generalize knowledge learned during the awake state (Payne et al. (2009)). In one study (Wamsley et al. (2010)), subjects learned to find the exit of a maze in a virtual 3D environment; subjects who were allowed to sleep exhibited a more complex understanding of the overall shape of the maze. Using a biophysical model of a cortical network, Gonzalez et al. (2019) and Wei et al. (2018) showed that sleep dynamics promote reactivation and help to create distinct representations for unique memories by devoting synapses to specific memory traces. This body of neuroscience work suggests that sleep-like activity may be applied to ANNs to enable the network to extract the gist of the training data without being constrained by the statistics of a specific training data set. Our specific hypothesis is that a sleep phase could aid in reducing a neural network's susceptibility to adversarial attacks and increase generalization performance by reducing the impact that imperceptible input changes can have on the task output.
In this new work, we propose a sleep-inspired algorithm to defend against adversarial attacks as well as to increase ANN robustness to noise. We borrow the notion of sleep from biology and apply an off-line unsupervised "sleep" phase to modify the parameters of a fully connected ANN. We demonstrate a number of performance improvements over existing defense algorithms, such as fine-tuning (adversarial retraining) and defensive distillation, on both adversarial and noise robustness. The contributions are summarized below:
• We analyze how robust the proposed sleep algorithm is to four different types of adversarial attacks on three different datasets (MNIST, CUB-200, and a toy dataset). For most conditions (MNIST, toy dataset), after the sleep phase was applied, the attacks consistently resulted in adversarial examples that were more distinct from the original input than the adversarial examples designed for the original (before sleep) network.
• We illustrate that the sleep algorithm creates a more robust network, whose performance on noisy and blurred inputs is higher than that of a control or defensively distilled network, and which is more robust to other types of distortions than ANNs that are fine-tuned on a single distortion.
• We analyze the impact of the sleep algorithm on task representation and demonstrate that the algorithm creates decision boundaries that more closely resemble the true classes, effectively extracting the gist of the data.
2 ADVERSARIAL ATTACKS AND DISTORTIONS
Adversarial attacks aim to create minimal perturbations that, while imperceptible to the human eye, fool ANNs. These attacks range from white-box to black-box attacks, based on how much information they assume the attacker to possess about the network. White-box attacks assume that the attacker has access to the network architecture, training data and weights. These attacks can range from absolute information, such as gradient-based attacks which compute the gradient of the loss with respect to the input (Brendel et al. (2017)), to score-based attacks which only utilize predicted scores of the model. Black-box attacks, which assume no knowledge about the network, solely rely on the decision made in order to craft adversarial examples. Attacks can be (a) targeted such that the attacker aims to create an adversarial example that the network would predict as a certain class or (b) untargeted where the attacker’s goal is simply to cause any kind of misclassification (Biggio & Roli (2018)).
In this work we consider four types of adversarial attacks ranging from white-box to black-box attacks. We assume that the attacker solely wants to cause a misclassification, with no respect to the output class. We present a brief description of each of the four attacks below (see Appendix for examples of images created by these attacks).
Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al. (2014)) computes the sign of the gradient of the loss function (J) with respect to the original input x using the weights θ of the network and the target labels y.
x′ = x + ε · sign(∇_x J(θ, x, y)). This represents the direction in which to change each pixel of the original input in order to increase the loss function. Based on the value of ε, the corresponding perturbation to the original image can range from small to large. Thus, in this work we use the average of the smallest values of ε needed to create an adversarial example x′ (a misclassified input) for each input in the testing set.
DeepFool. DeepFool (Moosavi-Dezfooli et al. (2016)) is an iterative method which approximates the nearest decision boundary to the input at time t and moves the input xt in that direction to compute xt+1. This process is repeated until a misclassification is produced or the runtime of the simulation is exceeded. For this attack, we measure the L2-norm between the original input x and the adversarial input x′. Thus, successful defenses should result in a high L2-norm for this algorithm.
Jacobian-based Saliency Map (JSMA). JSMA (Papernot et al. (2016a)) aims to craft adversarial examples that minimize the L0-norm of x− x′ by reducing the number of pixels that are altered. In summary, the algorithm computes the gradient, as done in FGSM but for all possible classes. These gradient values represent how changing each pixel contributes to the overall loss function, with large values indicating a significant effect on the loss. These values are used to create a saliency map, where each pixel’s impact on the loss is modelled. The algorithm utilizes this saliency map to alter individual pixels, repeating the gradient and saliency map computation until an adversarial example is created. For this type of attack, we utilize the L2-norm to determine defense success.
Boundary Attack. The Boundary Attack (Brendel et al. (2017)) is a black-box attack which relies solely on the decision of the ANN to craft an adversarial example. Given an input x, a random input x′_0 is chosen such that f(x) ≠ f(x′_0), where f(x) is the label produced by the ANN. In our work, x′_0 is drawn from a uniform distribution. The attack starts by moving x′_0 toward x until it reaches the decision boundary between f(x) and f(x′_0). From here, the attack consists of two steps: an orthogonal perturbation and a forward perturbation. During the orthogonal perturbation, random points on the hypersphere around the current adversary x′_t are sampled; those that are adversarial and closer to x than before are added to the queue for forward perturbation. During the forward perturbation, a small step is taken from x′_t toward x as long as f(x) ≠ f(x′_t). This process is repeated until a convergence criterion is met. For this attack, we utilize the L2-norm to define defense success.
Distortions. Although not specifically designed to attack an ANN, distortions negatively impact ANN performance. In this work we consider two simple distortion techniques: blurring and Gaussian noise. In the first case, we perform 2-D Gaussian filtering with a blur kernel of varying standard deviation in order to blur the images. In the second case, we add Gaussian noise with mean 0 and standard deviation σ. These distortions are tested on the networks implementing the proposed sleep algorithm as well as on those using the adversarial defenses discussed below.
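Both distortions can be generated with standard tools, for example as in the following sketch (scipy/numpy; parameter values are placeholders):

import numpy as np
from scipy.ndimage import gaussian_filter

def blur(image, sigma):
    # 2-D Gaussian filtering with a blur kernel of standard deviation sigma.
    return gaussian_filter(image, sigma=sigma)

def add_noise(image, sigma, rng=None):
    # Additive Gaussian noise with mean 0 and standard deviation sigma.
    rng = rng or np.random.default_rng()
    return np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)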
3 ADVERSARIAL DEFENSES
We compare our sleep algorithm with two existing adversarial defenses: defensive distillation and fine-tuning, or adversarial retraining. Defensive distillation (Papernot et al. (2016b)) utilizes two training sessions in order to create a distilled network. First, an initial network is trained on (X,Y ), where X is the training data, and Y is the one-hot encoded training labels. The activation function of this network is changed such that the softmax function of the output layer is computed using a temperature term T as follows:
F_i(X) = exp(z_i(X)/T) / Σ_{l=0}^{N−1} exp(z_l(X)/T).
A higher T drives the ANN to produce a softer probability distribution, with larger probability values assigned to the non-maximal classes, whereas lower T values yield a representation closer to the one-hot encoded labels. After the first network is trained, the output of the network (probability values) is used to train a distilled network with the same softmax-temperature function. Previous work has shown this approach can be successful at preventing some types of attacks (Papernot et al. (2016b)). However, others have shown that it is not successful at defending against modified versions of those attacks or novel attacks in general (Carlini & Wagner (2016; 2017)). Based on the previous work which found that temperature values between 20 and 100 effectively prevent adversarial attacks (Papernot et al. (2016b)), we use a temperature value of T = 50 in our implementation of defensive distillation.
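The temperature-scaled softmax can be written compactly as follows (numpy sketch with T = 50, the value used in our distillation experiments):

import numpy as np

def softmax_with_temperature(logits, T=50.0):
    # Higher T softens the distribution used as soft labels for the distilled network.
    z = logits / T
    z = z - z.max()          # for numerical stability
    e = np.exp(z)
    return e / e.sum()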
Adversarial retraining aims to fine-tune the network on adversarial examples with the correct labels as a form of regularization. Previous work has shown that adversarial retraining can mitigate the effectiveness of some adversarial attacks. Goodfellow et al. (2014) showed that adversarial retraining can reduce the error rate on MNIST, demonstrating greater ANN robustness after fine-tuning. Likewise, Moosavi-Dezfooli et al. (2016) showed that fine-tuning on DeepFool attacks can reduce the effectiveness of their attacks. However, they observed that fine-tuning on FGSM attacks has negative results, actually increasing the strength of the attack. This suggests that fine-tuning may overfit the network to certain attacks, while failing to extrapolate to other attacks, similar to results shown for generalization in ANNs (Geirhos et al. (2018)). For the adversarial retraining procedure presented here, we train the network on the original input and then fine-tune the network on various adversarial attacks with a reduced learning rate.
4 SLEEP ALGORITHM
The basic intuition behind the sleep algorithm is that a period of offline activity, whereby network weights are modified according to an unsupervised learning algorithm, allows the parameters of the network to become more reflective of the underlying statistics of the task at hand, while not overfitting the statistics of the training data. The pseudocode is presented in Algorithm 1. In short, an ANN is trained using stochastic gradient descent and the standard backpropagation algorithm (exact parameters used for each of the datasets are shown in Table 2). After training, the network structure is converted into a spiking neural network (SNN). After building the SNN, we run a sleep phase which modifies the network connectivity based on spike-timing dependent plasticity (STDP). After the sleep phase, the SNN network is converted back into the ANN and testing is performed.
4.1 SPIKING NEURAL NETWORKS
SNNs seek to closely model temporal brain dynamics. In short, SNNs are composed of spiking neurons and model the transformation of information and its dependence on the exact timing of spikes that occurs in biological networks (Ghosh-Dastidar & Adeli (2009)). Individual neuron models can range from simple integrate-and-fire type neurons, which sum their inputs and produce an output (spike) if the sum exceeds some firing threshold, to more complex Hodgkin-Huxley type neurons which model sodium-, potassium-, and chloride-channel kinetics (Abbott & Kepler (1990)). Recent work has shown that a near loss-less conversion between ANNs and SNNs can be achieved by propagating activity through a spiking neural network for a given input and counting the number of times that each output neuron fires (Diehl et al. (2015)).
To convert an ANN to SNN (Lines 1-3 of pseudocode), we assume the ANN utilizes ReLU neurons with no bias. This assumption is made so that the output neuron’s activation can be treated as a firing rate, either zero or positive, and that the thresholds of all neurons in a given layer are of the same scale. The weights from the ANN are directly mapped to the SNN. In our analysis, each unit in the SNN is modelled as an integrate-and-fire type neuron, computing the following equation:
τ_m dv/dt = −v(t) + Σ_{i=1}^{N} w_i · s(i).
Here, τ_m represents the decay constant of the membrane potential, v is the voltage at a given time, w_i is the weight connecting from neuron i, and s(i) is the spiking activity of neuron i, either 1 or 0.
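A discretized form of this update can be sketched as below (numpy; the time step, decay constant, and the reset-after-firing convention are illustrative assumptions rather than details specified above):

import numpy as np

def integrate_and_fire_step(v, w, spikes_in, threshold, tau_m=0.02, dt=0.001):
    # One Euler step of the membrane equation followed by thresholding.
    v = v + (dt / tau_m) * (-v + w @ spikes_in)
    spikes_out = v > threshold
    v = np.where(spikes_out, 0.0, v)   # reset neurons that fired (a common convention)
    return v, spikes_out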
4.2 PLASTICITY AND SLEEP
The key advantage of using an SNN is that biologically inspired training rules can be applied while the network is driven by noisy input. Empirical data suggest that the brain uses spike-timing dependent plasticity (STDP) (Song et al., 2000), where weight updates depend on the relative timing of pre- and post-synaptic spikes. It has been shown that STDP results in balanced activity, where all neurons fire in equal proportions (Song et al. (2000)). Here we utilize a modified version of STDP: if a presynaptic spike induces a post-synaptic spike, then the weight between these neurons is increased. If a post-synaptic spike occurs but the pre-synaptic neuron does not spike, then the corresponding weight is decreased (in this case post-synaptic spiking may occur because of spiking in other neurons connecting to that post-synaptic neuron).
The sleep training phase we propose here can be described as follows. First, inputs to each neuron of the input layer must be presented as spiking activity in order to propagate activity from the input layer to the hidden layers of the network. We convert inputs (real-valued pixel intensities or features) to spikes by defining a maximum firing rate f_max (in spikes/sec) and computing a Poisson-distributed spike raster, such that inputs with higher values (i.e. brighter pixels) have a higher rate than inputs with lower values, with no spike rates exceeding f_max. Next, activity is propagated through the network as spikes and the STDP rule is applied to update weights. In biological networks, the increase of synaptic strength during slow-wave sleep leads to characteristic patterns of activity with repetitive periods of elevated firing (Up-states), when previously learned memory traces are spontaneously replayed. To simulate these dynamics, synaptic weights in the SNN are up-scaled to induce high firing rates in later layers. Other important parameters include the threshold for each layer and the length of sleep. The parameters used for each dataset are presented in Table 3.
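The sleep phase itself can be sketched compactly as below; this is a minimal numpy illustration of the Poisson input conversion (approximated per time step by Bernoulli draws), forward spike propagation, and the simplified STDP rule described above. Parameter values, the voltage reset, and variable names are illustrative assumptions rather than the settings listed in Table 3.

import numpy as np

def sleep_phase(weights, scales, thresholds, x, timesteps, fmax=200.0, dt=0.001,
                inc=0.001, dec=0.001, rng=None):
    # weights[l]: array of shape (units_l, units_{l-1}); x: input vector in [0, 1].
    rng = rng or np.random.default_rng()
    rates = np.clip(x, 0.0, 1.0) * fmax                 # spikes per second, capped at fmax
    v = [np.zeros(w.shape[0]) for w in weights]         # membrane potentials per layer
    for _ in range(timesteps):
        s = rng.random(x.size) < rates * dt             # Bernoulli approximation of Poisson input
        for l, w in enumerate(weights):
            v[l] += scales[l] * (w @ s)                 # integrate scaled presynaptic spikes
            fired = v[l] > thresholds[l]
            v[l][fired] = 0.0                           # reset neurons that spiked
            # Simplified STDP: potentiate co-active pairs, depress post-without-pre.
            w[fired] += np.where(s > 0, inc, -dec)
            s = fired                                    # spikes feed the next layer
    return weights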
4.3 EXPERIMENTS AND DATASETS
Below, we describe the general experimental setup as well as the datasets tested. First, we trained a control ANN using the training set for each of the main datasets used in this study. Next, we created a defensively distilled network using T = 50 for the temperature parameter to create the second test network. Then, we fine-tuned the control ANN on a specific attack or distortion method to create the third test network. Finally, we converted the control ANN to an SNN and applied the sleep algorithm as described above to create the fourth test network. We created adversarial examples for each of these four networks using the attacks we described above (fine-tuned networks are tested on the attacks they were fine-tuned on). Then, we analyze how successful each attack is to fool each of the four networks using the metrics defined above. For generalization (blur and noise), we performed the same setup as above creating four different networks. We then tested each network on varying levels of distortion. We tested networks fine-tuned on blurred and noisy images to measure how performance generalizes across distortion methods. We averaged performance across a minimum of three networks for each attack and distortion.
We used three datasets to compare performance: Patches (a toy dataset created simply for analysis), MNIST (LeCun et al. (1998)), and CUB-200 (Welinder et al. (2010)). Patches consists of four binary images arranged in a 10x10 square. Each image has its own label (1-4) and consists of 25 bright pixels (value set to 1) and 75 dark pixels. The overlap of bright pixels among the four images (see Appendix) is chosen such that the task is not trivial. The MNIST dataset consists of 70,000 28x28 greyscale images of handwritten digits, with 60,000 in the training set and 10,000 in the testing set. CUB-200 is a high resolution dataset of images of birds with 200 bird species,
Algorithm 1 Sleep:
 1: procedure CONVERTANNTOSNN(nn)
 2:   Map the weights from (nn) with ReLU units to a network of integrate-and-fire units (snn)
 3:   Apply weight normalization and return the scale for each layer ([24])
      return snn, scales
 4: procedure CONVERTSNNTOANN(nn)
 5:   Directly map the weights from the integrate-and-fire network (nn) to a ReLU network (ann)
      return ann
 6: procedure SLEEP(nn, I, scales)                        ▷ I is the input
 7:   Initialize v (voltage) = 0 vectors for all neurons
 8:   for t ← 1 to Ts do                                  ▷ Ts - duration of sleep
 9:     S(1, t) ← Convert input I to Poisson-distributed spiking activity
10:     for l ← 2 to n do                                 ▷ n - number of layers
11:       v(l, t) ← v(l, t−1) + scales(l−1) · W(l, l−1) · S(l−1, t)      ▷ W(l, l−1) - weights
12:       S(l, t) ← v(l, t) > threshold(l)                ▷ Propagate spikes
13:       W(l, l−1) ← W(l, l−1) + inc if S(l, t) = 1 and S(l−1, t) = 1;
                      W(l, l−1) − dec if S(l, t) = 1 and S(l−1, t) = 0    ▷ STDP
14: procedure MAIN
15:   Initialize neural network (ann) with ReLU neurons and bias = 0.
16:   Train ann using backpropagation.
17:   snn, scales = ConvertANNtoSNN(ann)
18:   snn = Sleep(snn, Training data X, scales)
19:   ann = ConvertSNNtoANN(snn)
with very few (∼30) images per class. For this dataset, we used previously extracted ResNet-50 embeddings, where ResNet-50 was pre-trained on ImageNet (He et al. (2016)). For CUB-200, we do not report results for blurring, since we are using extracted features, not images.
5 RESULTS
We evaluate the sleep algorithm in two settings: (1) adversarial attacks designed to fool neural networks and (2) generalization distortions designed to reflect imperfect viewing conditions or other types of noise. For adversarial attacks (other than FGSM), we utilize the following metric to evaluate the success of each defense. Let x′_i be the adversarial example created for input x_i. The total score S_A for an attack is the median squared L2-distance over all samples, where N is the dimension of the space:
S_A = median((1/N) ||x′_i − x_i||_2^2).
For FGSM, we define the following metric which computes the median minimum noise level needed to produce a misclassification across all samples:
S_FGSM = median(min(ε_i)) s.t. f(x_i + ε_i · x′_i) ≠ f(x_i).
For MNIST and CUB-200, we evaluate the attacks on all examples in the testing set. Examples that the networks get wrong before the attack was implemented are discarded from the analysis (in these cases ||x′_i − x_i||_2^2 = 0 and ε_i = 0 for all attacks). For FGSM and distortions, we also include plots of classification accuracy as a function of noise level. For DeepFool and JSMA, we report adversarial attack accuracy (the number of examples where f(x) = y and f(x′) ≠ f(x), where y is the correct label, over the number of examples tested). Note that these algorithms would always produce an adversarial example if allowed to run forever. However, due to computational limitations, we imposed run-time limits on the number of iterations for these algorithms (see Appendix). Thus, a lower adversarial attack accuracy indicates that the attack would need more iterations to reach 100% accuracy. This is a measure similar to distance, since more iterations would result in more distinct adversaries for all attacks implemented and the updates at each iteration have the same magnitude for each defense.
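These scores can be computed directly from the saved originals and adversaries, for instance as in the following numpy sketch (array shapes and variable names are assumptions):

import numpy as np

def attack_score(x, x_adv):
    # S_A: median over samples of the per-dimension squared L2 distance.
    n_dims = x.shape[1]                       # x, x_adv: (n_samples, n_dims)
    return float(np.median(np.sum((x_adv - x) ** 2, axis=1) / n_dims))

def fgsm_score(min_epsilons):
    # S_FGSM: median over samples of the smallest epsilon that flips the prediction.
    return float(np.median(min_epsilons))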
(Panels A, B, and C: Patches, MNIST, and CUB-200.)
5.1 ADVERSARIAL ATTACKS
Here we report the scores for all different attacks and for all datasets. For the FGSM attack, the sleep algorithm increases the median minimum noise needed for misclassification on all three datasets compared to the control network (also see Figure 1). For the MNIST dataset, the amount of noise needed to fool the network after the sleep algorithm was almost double that needed for either the fine-tuning or defensive distillation approaches. For the Patches dataset, both defensive distillation and fine-tuning increase the robustness of the network. However, on CUB-200, only fine-tuning and sleep were able to defend, albeit marginally, against the FGSM attack. Looking at the classification accuracy of the network as a function of noise added (ε, Figure 1), we observe that
in the Patches and CUB-200 datasets, sleep tends to have higher classification accuracy than the other methods for ε < 0.1. After this point, sleep tends to have classification accuracy comparable to the other methods. For MNIST, the baseline classification accuracy on the original test set decreases slightly compared to the other methods (80% after sleep). However, the performance remains high longer than for the other defense methods on images that were correctly classified. We observed that performance continued to drop after a sufficiently large amount of noise was added. This is biologically plausible, as adding more noise to an image should result in image degradation and misclassifications. In sum, these results indicate that a sleep phase can successfully mitigate FGSM, more so than a control network.
For DeepFool, sleep has a significant effect on the defense score on the MNIST dataset, both reducing the attack success rate and increasing the distance between the adversarial example and the original input by an order of magnitude. For Patches and CUB-200 this effect is less pronounced, with fine-tuning or the control network performing better. We hypothesize that sleep was ineffective in preventing the DeepFool attack in tasks with very few exemplars per class (Patches) or a large number of classes (CUB-200). In CUB-200, there is a large number of classes so the distance between the input and the nearest decision boundary is smaller (this is supported by the fact that JSMA, an L0 attack, does worse than DeepFool for CUB-200 and vice versa for MNIST, control networks). In this case, sleep is unable to move the decision boundary of one class without impinging on the decision space of another class. In MNIST, where the decision space for one class is presumably larger, sleep can alter decision boundaries in a way that has a minimal effect on other classes.
Sleep successfully increases the network’s robustness to the JSMA attacks on MNIST and Patches, reducing the attack success rate in the case of MNIST and increasing the distance needed to create an adversary for Patches. On CUB-200, there is a marginal reduction in the adversarial attack accuracy compared to the control network. Defensive distillation and fine-tuning also reduce JSMA’s effectiveness. However, for these two defenses, in the case of MNIST, the networks were capable of finding an adversary for a higher percentage of the testing set. Thus, the effect of changing a small number of important pixels is mitigated after running the sleep algorithm.
For the Boundary Attack, we found that no defense mechanism helps compared to the control in decreasing the attack’s effectiveness on the MNIST dataset. However, for CUB-200 and Patches, the sleep algorithm results in a higher defense score than that for the control network. This lends support to the idea that sleep favorably alters decision boundaries so that it becomes harder to find an adversarial example that is close to the original image after the sleep phase. This also suggests that sleep is not simply obfuscating gradients, which has been a common criticism of several adversarial defenses (Athalye et al. (2018)), which are tested on white-box attacks. In fact, given the long run-time for convergence of this algorithm, if we define a threshold for adversarial attack success (L2-norm > 1), then sleep successfully defends against this attack on the MNIST dataset (see Table 4).
Why does sleep phase help? It has been shown that sleep tends to promote an increase in stronger weights while pruning weaker weights, thus increasing the width of the weights’ distribution (Gonzalez et al., 2019). This results in the consolidation of strong memories at the cost of diminishing weak memories. From this point of view, a memory is a subspace or abstraction in the decision space corresponding to a given class. Sleep may result in enlarging the subspace the network allocates to a stronger category while shrinking weaker ones (Figure 5A). The process of strengthening the strong memory also results in making it robust and noise invariant, as seen in Figure 5B where the first 8 categories (numbers 0-7) are strengthened and become more invariant to the FGSM attack, while the last two digits are essentially forgotten and the network cannot confidently predict exemplars from these classes (Figure 5C). If the noise is less targeted, as in the case of random noise or blurring, sleep does not need to alter the decision space as much to produce better generalization and can maintain a high baseline accuracy, as we demonstrate in the next section.
5.2 GENERALIZATION
Figure 2 shows the network performance for noisy and blurry distortions of data for MNIST (A) as well as noisy distortions for the CUB-200 feature embeddings (B, see Figure 3 for results on Patches). Overall, fine-tuning on an image distortion results in the best performance for that specific distortion. However, as was noted (Geirhos et al. (2018)), fine-tuning on a specific distortion does
not extend to other types of distortions. In our analysis, fine-tuning the network on blurred MNIST images results in high performance (> 80%) on blurred images. However, for noisy images, this performance was only marginally above that of the control network. The sleep algorithm increased performance for both distortion methods, since this approach is not tailored to any one representation of the training set.
Finally, we tested how sleep increases robustness to blur and noise distortions. In biological systems, sleep increases generalization through the replay of memories learned during the awake state, which leads to changes in synaptic weights. These changes entail both an increase in synaptic weights associated with a specific task and pruning of synapses involved in other tasks (Gonzalez et al., 2019; Tononi & Cirelli, 2006). Figures 9 and 10 show that correlations among like digits in the hidden layers of the network are greater after applying sleep than before for noisy and blurred images. Likewise, pairs of different digits usually become decorrelated after sleep, suggesting synaptic pruning. We also show that both the normalized spiking activity and the activations of digit-specific neurons are higher after sleep than before (Figures 11 and 12, see Appendix for details). These results suggest that the sleep algorithm increases robustness through biologically plausible learning mechanisms involving the replay of relevant activity during the sleep phase.
6 CONCLUSIONS AND FUTURE DIRECTIONS
In this work, we show that a biologically inspired sleep algorithm can increase an ANN’s robustness to both adversarial attacks and general image distortions. The algorithm augments the normal (e.g., back-propagation based) training phase of an ANN with an unsupervised learning phase in the equivalent SNN modelled after how the biological brain utilises sleep to improve learning. We hypothesize that the unsupervised sleep phase creates more natural feature representations which in turn lead to more natural decision boundaries, thus increasing the robustness of the network. Although this robustness may come at a cost of overall accuracy, it has been shown that robustness may have multiple important benefits, such as more salient feature representations as well as invariance to input modifications (Tsipras et al. (2018)). We also show that the trade-off between robustness and accuracy does not always occur, particularly for image distortions such as noise or blur. Future work includes converting the sleep algorithm into a regularization technique to be applied in more
standardized machine learning frameworks as well as understanding the theoretical basis for the beneficial role of spike based plasticity rules in increasing network robustness.
7 ACKNOWLEDGEMENTS
This work was supported by the Lifelong Learning Machines program from DARPA/MTO (HR0011-18-2-0021) and ONR (MURI: N00014-16-1-2829).
8 APPENDIX
8.1 TRAINING PARAMETERS
Here, we define the neural network parameters used for each of the three datasets as well as the sleep, defensive distillation, and fine-tuning parameters. Table 2 shows the parameters used to train each of the control networks discussed in the paper. All neural networks were trained with ReLU neurons. Table 3 shows the parameters used during sleep for each of the three datasets. Note that the sleep parameters for MNIST and CUB-200 were chosen by running a genetic algorithm to maximize performance on the FGSM attack (performance was determined on the training set so as not to overfit to the test set). For the other three attacks, the parameters that maximized FGSM performance were used. Also, for noise and blur generalization, different parameters were chosen (not shown here).
For the defensively distilled networks tested in the paper, we first train an initial network using a temperature of 50. Then, we use the training set to compute soft labels and finetune the initial network on these soft labels for the same number of epochs and with the same learning rate.
For the fine-tuned networks, we take the control networks trained with the parameters shown in Table 2. The learning rate is reduced to 0.05 and the network is fine-tuned on a mixture of either adversarial attacks, blur or noise and the original images/features. For CUB-200, we perform finetuning for 10 epochs.
8.2 PATCHES ANALYSIS
The Patches dataset represents an easily interpretable example where we can understand what happens to the weights after sleep. Figure 3A shows an example of the dataset. Here, we have 4 images each belonging to 4 different classes. 25 pixels are whitened in each image and the remaining 75 pixels are dark. There is a 15 pixel overlap, so that weights connecting from input to output layer must take this into account in order to separate the images. Figure 3B illustrate the blur and noise distortions tested for this dataset and Figure 3C shows the results for the blur and noise distortions.
After the network is trained, we can analyze the weights connecting from each of the 100 input neurons to the 4 output neurons (see Figure 4, top row). We theorize that optimally robust behavior would occur when weights connecting from ON-pixels are positive, weights connecting from
overlapping pixels are near 0, and weights connecting from OFF-pixels are negative. In this case, changing the value of overlapping pixels will have no effect on classification. Changing the value of OFF-pixels will cause the network to predict another class, where OFF-pixels may be ON-pixels or indicative of that class. Changing the value of ON-pixels will only have a negative impact if the brightness of the pixel is reduced significantly. Thus, in this circumstance, the network should behave robustly.
In the control network, we observe that weights connecting from ON-pixels (pixel-value = 1) increase while weights connecting from OFF-pixels remain at 0. Weights connecting from overlapping pixels remain near 0 or positive. Defensive distillation causes some weights connecting from overlapping pixels to decrease, likely because the soft labels used in defensive distillation cause overlapping pixel units to alter the probability values computed by the network in such a way that does not truly reflect the impact of the overlapping pixels. In the fine-tuned networks (both on blurred images and noisy images), we observe an increase in ON-pixel weights and an increase in noisiness of OFF-pixel weights. Likewise, in the sleep network, OFF-pixel weights become negative while ON-pixel weights remain the same. In these cases, robustness is increased as weights become more similar to our hypothesized ideal weights. Essentially, the magnitude of input changes need to change classification increase since the spread between ON-pixel weights and OFF-pixel weights increases. We quantify the spread in weights by taking the difference between the average weight connecting from ON-pixels and the average weight connecting from OFF-pixels. This represents the mean input that each correct output neuron receives. This result is shown in Figure 3D. Of note is that this weight spread is increased for both the sleep and finetuning-noise network, suggesting that these defenses bring the weights closer to their ideal values for computing robustness.
8.3 ADVERSARIAL ATTACKS
Here, we describe the general approach for implementing DeepFool, JSMA, and the Boundary Attack discussed in the paper. We also show examples of adversaries created for each of the defense networks from these attacks.
Figure 6: A) DeepFool adversarial examples for each defense (Control, Defensive Distillation, Fine-tuning, Sleep); the network's prediction is shown above each image. B) JSMA. C) Boundary Attack.
DeepFool. DeepFool (Moosavi-Dezfooli et al. (2016)), as mentioned above, is an iterative algorithm that, at each iteration, aims to move the adversarial example in the direction of the closest decision boundary until it produces a misclassification. We based our implementation on that of Rauber et al. (2017). We stopped running the algorithm when either an adversarial example was found or 100 iterations had passed. Examples of DeepFool attacks on the MNIST dataset are shown in Figure 6A. At each iteration we compute a linear approximation of the loss function and take a step in the direction that would result in a misclassification. The equations used and pseudocode can be found in the original DeepFool paper.
JSMA. JSMA is also an iterative algorithm; at each iteration it identifies the pixel that would change the loss function the most and alters that pixel, until a misclassification is produced. For this method, we set a run-time limit of 500 iterations. We also remove a pixel from the saliency map once it has been updated seven times, so the algorithm can focus on other pixels. We set the change to each pixel to a constant value, 0.1. This represents how much each pixel is updated (in the direction that results in a misclassification) at each iteration. Pseudocode can be found in the original publication (Papernot et al. (2016a)). We show examples of adversaries created by JSMA in Figure 6B.
Boundary Attack. The Boundary Attack (Brendel et al. (2017)) starts with an adversarial example and moves it closer to the decision boundary of the correct class. At each step of the algorithm, the method performs orthogonal and forward perturbations to move the adversary closer to the original image, thus reducing the distance between the adversary and the original image. We set both a distance convergence criterion (L2-norm = 1e-7) and a run-time limit on the attack (1000 iterations). Example attacks are shown in Figure 6C. We note that sometimes the algorithm does not successfully produce an "imperceptible" adversarial example and instead produces a noisy output (the starting condition is a noisy image). If we define a threshold for a successful adversarial attack (L2-norm > 1), then we observe the results for MNIST shown in Table 4.
8.4 GENERALIZATION ANALYSIS
In this section, we analyze how sleep can aid in increasing ANN robustness. In biological networks, sleep extracts the gist of a task through replay (Lewis & Durrant (2011)). We hypothesized that our sleep algorithm works in the same manner. First, we tested the ability of the sleep network to decorrelate distinct inputs by analyzing the effect of running sleep and testing on our two distortion techniques (see Figure 7).
We computed the correlations of network activities in each of the hidden layers of the network before and after implementing our defense methods. For each pair of digits, we computed the average correlation of layer activities in the undistorted (Figure 8), noisy (Figure 9), and blurred (Figure 10) conditions. Each figure reports the difference in digit pairwise correlations between
the defense method and the control network for each set of inputs. For our sleep network, it is apparent that in layers 2 and 3 the correlations of the same digits (the diagonal) increase after sleep. Additionally, the correlation of distinct digits typically undergoes a negative change, representing decorrelation of distinct inputs. This also holds for defensive distillation and both of the fine-tuned networks. This suggests that the ANN representation of different exemplars of the same digit becomes more similar after sleep, or after any of the defense methods, when compared to the control. This is not simply due to an increased overlap of all inputs, since exemplars of different digits become decorrelated after applying a defense method.
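A sketch of this pairwise-correlation measurement is given below (numpy; the random sampling of example pairs is an assumption, since the exact pairing used for the figures is an implementation detail):

import numpy as np

def mean_digit_pair_correlation(acts, labels, d1, d2, n_pairs=1000, rng=None):
    # acts: (n_examples, n_hidden) activations of one hidden layer; labels: digit labels.
    rng = rng or np.random.default_rng()
    a = acts[labels == d1]
    b = acts[labels == d2]
    i = rng.integers(0, len(a), n_pairs)
    j = rng.integers(0, len(b), n_pairs)
    corrs = [np.corrcoef(a[k], b[m])[0, 1] for k, m in zip(i, j)]
    return float(np.mean(corrs))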
Next, we performed the same correlation analysis on noisy and blurred images to see how the representation of distorted images changes after applying a distortion method. First, we note that finetuning on noisy images results in stronger correlation of the same (noisy) digit but weaker correlations of different (noisy) digits, as noted above. However, fine-tuning on blurred images does not have as strong an effect. Second, sleep seems to have a beneficial effect on the correlation matrices for both blurred and noisy images (comparing the right column of Figures 9 and 10). This illustrates the beneficial role of sleep in creating distinct representations of digits, where different neuronal ensembles encode different digits. This change in representation should result in increased robustness since changes to the input must be larger in order to recruit neuronal ensembles that represent other digits.
On top of decorrelating the representation of distinct memories by pruning synapses, biophysical modelling suggests that sleep can also aid in strengthening connections, thus strengthening the response of the primary neurons involved in memory recall (Gonzalez et al. (2019)). To test this hypothesis in our networks, we analyzed the firing rates and activations of digit-specific neurons before and after sleep. Before describing the analysis, we note that SNNs can be used to perform classification and that a near loss-less conversion between ANNs and SNNs has been achieved on the MNIST task (Diehl et al. (2015)). To perform classification, a digit is presented (as a Poisson spike train) to the network and spikes are propagated throughout the network for a given time period (or number of presentations of the input). Analyzing network activity in the spiking domain can be easier than in the activation domain (ANNs), since spikes are oftentimes easier to interpret than neuronal activations.
For this reason, we first analyze how the spike rates of digit-specific neurons change before and after sleep in the spike domain. To do this we present all images of a specified digit to the spiking network and count the number of spikes from each neuron (holding the weights constant). We define digit specificity by looking at the 100 neurons with the highest firing rates in layer 2. In Figure 11, we show that the normalized firing rate of these neurons usually increases after sleep (normalized by dividing by the maximum firing rate observed from the SNN).
Next, we perform the same analysis in the activation domain. Again, we define digit-specific neurons by looking at the top 100 neurons with the highest activation for a specific digit. We look at the normalized mean activations of these neurons before and after sleep and note that for all digits this value is higher after sleep than before sleep (Figure 12). This suggests that the neurons in the network are responding more strongly to the presentation of the same digit, thus increasing the robustness of the network as more noise must be added in order to counter the effect of this stronger response. This also suggests that our algorithm works in a biologically plausible way: both by decorrelating distinct inputs and increasing the strength of similar inputs.
| 1. What is the main contribution of the paper regarding deep neural networks?
2. What are the strengths and weaknesses of the proposed sleep algorithm compared to other defense mechanisms?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any inconsistencies or errors in the notation used in the paper regarding adversarial attacks?
5. Can the authors provide more details about the reasoning behind choosing T=50 for distillation?
6. How can the format of the spike trains be improved, and what is the intuition behind scaling weights to induce high firing rates in later layers?
7. How do the results of the sleep algorithm compare to baseline defenses on different datasets and attack types?
8. Are there any suggestions for additional experiments or improvements to the current method? | Review | Review
Disclosure on reviewer's experience: I am not an expert on adversarial attack methods or defenses, but I am well read in the general literature on robustness and uncertainty in deep neural networks.
The authors present a biologically inspired sleep algorithm for artificial neural networks (ANNs) that aims to improve their generalization and robustness in the face of noisy or malicious inputs. They hypothesize that "sleep" could aid in generalization by decorrelating noisy hidden states and reducing the overall impact of imperceptible perturbations of the input space. The proposed sleep algorithm broadly involves 1) converting the trained ANN to a "spike" neural network (SNN), 2) converting the input signal (pixels) to a Poisson distributed "spike train" where brighter pixels have higher firing rates than darker pixels, 3) propagating the neuronal spikes through the SNN, updating weights based on a simplified version of spike-timing-dependent plasticity (STDP), and 4) converting the network back to an ANN after the sleep phase has finished. They present a detailed comparative study spanning three datasets, four types of adversarial attacks and distortions, and two other baseline defense mechanisms, in which they demonstrate significant improvements (in some cases) of the sleep algorithm over the baselines.
The core concept behind the authors' work is novel and interesting, and the experimental design is thorough and well controlled. Although the results are (I would argue) somewhat mixed, they are nonetheless positive enough to encourage more work in applying "sleep" and other relevant ideas from neuroscience to the problem of robustness in deep neural networks. I have some questions and concerns which I will detail per-section below, but overall, I believe that this paper is a valuable contribution to the literature and should be accepted once the authors have made a few necessary revisions.
Section 1: Introduction
"We report positive results for four types of adversarial attacks tested on three different datasets (MNIST, CUB200, and a toy dataset) ..."
It's debatable whether or not the results from the CUB-200 dataset are positive. The sleep algorithm fails to outperform the baselines for each attack type (except for an almost negligible advantage in accuracy on JSMA) and barely even outperforms the control network in most cases (2/4 attacks it actually underperforms the control). I think the authors should consider rephrasing this statement to better reflect the actual results.
Section 2: Adversarial Attacks and Distortions
FGSM: The notation used here is somewhat inconsistent with the source paper. Goodfellow et al use epsilon to denote what I think the authors call eta, and call the second term, epsilon*sign(grad(J)), eta. Furthermore, the authors state that "this represents the direction to change each pixel in the original input in order to decrease the loss function." But this doesn't make sense. An adversary should want to *increase* the loss function enough to cause a misclassification. Goodfellow et al use this expression to formulate a L1-like regularization term and describe the training procedure "minimizing the worst case error when the data is perturbed by an adversary", which seems more sensible. This section should be rewritten to be more consistent with the source.
Section 3: Adversarial defenses
Regarding distillation: "We use T=50 to compare with the sleep algorithm"
The authors should elaborate a bit more on the reasoning for this choice. It seems very arbitrary.
Section 4: Sleep algorithm
1. Algorithm 1: Why is line 9 inside of the for loop? It doesn't seem to be at all dependent on t. One would expect the input to only need to be converted once. Additionally, in lines 11-13, the l's in W(l,l-1) and similar should be unbolded. It's confusing that the format changes (unless I am missing something and it's actually a different variable).
2. Spike trains should be more rigorously defined, preferably with formalized notation. It's a bit unclear exactly what they are from the current text. Are they just parameters for a Poisson? Or outputs from a Poisson over T time steps? Or something else?
3. "weights are scaled by a parameter to induce high firing rates in later layers"
It would be good to include more details on this parameter, how the values are chosen, and the intuition behind this idea. I assume it's because of higher level feature representations in later layers of deep neural networks.
Section 5: Results
1. It's confusing that sometimes accuracy refers to classification accuracy and sometimes adversarial attack accuracy. I would recommend assigning a different name to the latter, or making sure that a qualifier precedes every reference to "accuracy" in this section.
2. In the second section of the results table (which is missing a label), why is the JSMA value for Defensive Distillation bolded? The distance measures for both the control network and for fine-tuning are higher. It seems like fine-tuning should be the one bolded.
3. Figure 1: caption is incorrect; it states "adversarial attack accuracy" and it should be "classification accuracy", otherwise the plots make no sense.
4. "we observe that in the Patches and CUB-200 dataset, sleep has beneficial results in moving the accuracy function above the other defense methods"
It should be noted that this is only true for eta < 0.1. After that, sleep and the control both converge to 50% accuracy. Also this sentence should be reworded to be less visual and more quantitative (e.g. sleep tends to have higher median accuracy scores than the other methods for eta < 0.1).
5. "We observe that performance continued to drop after a sufficiently large amount of noise was added"
More than that, the other methods converged to a small band of accuracy values; sleep continued to deteriorate. This is a significant difference. It would be a good idea to re-run this experiment with a binary classification problem (e.g. only two digits of MNIST) and see if this phenomenon still occurs. Then, the noisy sleep classifier predictions could simply be inverted to get improved accuracy scores.
6. In the analysis of JSMA, as noted before, it's rather dubious to claim that sleep had any kind of significant effect on the attack success rate (or distance) for CUB-200. I would rewrite this section to better represent the results.
7. Figure 2 formatting: Legend is overflowing out of the first figure. Additionally, the legend colors should be made to match across all three figures, and the legend should either appear in all three (if necessary for some reason) or only in one.
8. Figure 2: The caption is incomplete and possibly incorrect. It's not clear why the first and last figures differ from each other, and the caption does not indicate this. The caption also only mentions two datasets, even though it says "for the following three datasets".
Appendix:
General formatting needs improvement. A lot of figures are off-centered, text misaligned, missing axis labels, etc. |
ICLR | Title
Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
Abstract
Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as in increasing ANN classification robustness. We compare the sleep algorithm’s performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.
N/A
Current artificial neural networks (ANNs) can perform and excel at a variety of tasks ranging from image classification to spam detection through training on large datasets of labeled data. While the trained network may perform well on similar testing data, inputs that differ even slightly from the training data may trigger unpredictable behavior. Due to this limitation, it is possible to design inputs with very small perturbations that can result in misclassification. These adversarial attacks present a security risk to deployed ANNs and indicate a divergence between how ANNs and humans perform classification. Humans are robust at behaving in the presence of noise and are capable of correctly classifying objects that are noisy, blurred, or otherwise distorted. It has been hypothesized that sleep promotes generalization of knowledge and improves robustness against noise in animals and humans. In this work, we utilize a biologically inspired sleep phase in ANNs and demonstrate the benefit of sleep on defending against adversarial attacks as well as in increasing ANN classification robustness. We compare the sleep algorithm’s performance on various robustness tasks with two previously proposed adversarial defenses - defensive distillation and fine-tuning. We report an increase in robustness after sleep phase to adversarial attacks as well as to general image distortions for three datasets: MNIST, CUB200, and a toy dataset. Overall, these results demonstrate the potential for biologically inspired solutions to solve existing problems in ANNs and guide the development of more robust, human-like ANNs.
1 INTRODUCTION
Although artificial neural networks (ANNs) have recently begun to rival human performance on various tasks, ranging from complex games (Silver et al. (2016)) to image classification (Krizhevsky et al. (2012)), ANNs have been shown to underperform when the testing data differs in specific ways even by a small amount from the training data (Geirhos et al. (2018)). This lack of generalization presents two issues when ANNs are utilized in the real world. First, ANNs are often trained on curated datasets of images designed to best capture the image content, whereas in real-world scenarios, they may be tested on disturbed or noisy inputs, not observed during training. Second, ANNs are susceptible to adversarial attacks, or the deliberate creation of inputs designed to fool ANNs that may be imperceptibly different from correctly classified inputs (Szegedy et al. (2013)). These two issues limit ANNs applicability in the real world and present potential security risks when deployed.
There have been two main approaches for investigating ANN robustness: adversarial machine learning and training data manipulation (Ford et al. (2019)). Adversarial machine learning aims to develop novel attack methods which perturb the input minimally while changing the ANN’s classification outcome (Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017); Goodfellow et al. (2014); Athalye et al. (2017); Nguyen et al. (2015)) as well as to design defense mechanisms which prevent these attacks from affecting ANN behavior (Papernot et al. (2016b); Goodfellow et al. (2014); Huang et al. (2015), see Yuan et al. (2019) for review). Training data manipulation research typically examines the impact of changing the input distribution during testing and observing the effect on ANN performance. Geirhos et al. (2018) showed that ANNs trained on images with one type of distortion may not perform well when tested on other types of distortions, even if images with both distortions appear identical to the human eye. Likewise, ANNs trained on unperturbed images exhibit reduced performance when images in the test set are distorted, for example, through horizontal translations, blurring, or the addition of compression artifacts (Dodge & Karam (2016); Vasiljevic et al. (2016); Zhou et al. (2017)). Although it has been proposed that adversarial and manipulation robustness can be increased through various mechanisms during the training phase, such as fine-tuning, recent research has shown that these methods are mostly ineffective or their effectiveness is inconclusive (Geirhos et al. (2018); Uesato et al. (2018); Athalye et al. (2018)).
It has been hypothesized that in the mammalian brain sleep helps to create generalized representations of an input learned during the awake state (Stickgold & Walker (2013); Lewis & Durrant (2011)). Sleep has been identified as being critical for memory consolidation - a process of converting recent memories into long-term storage (Rasch & Born (2013)). During sleep, there is reactivation of neurons involved in previously learned activity (Stickgold (2005)) and this reactivation is likely to invoke the same spatio-temporal pattern of neuronal firing as the pattern observed during training in the awake state (Wilson & McNaughton, 1994). Sleep reactivation, or replay, serves to strengthen synapses involved in a learned task through local synaptic plasticity, such as Spike Time Dependent Plasticity (STDP). Plastic changes during sleep can increase a subject’s ability to form connections between memories and to generalize knowledge learned during the awake state (Payne et al. (2009)). In one study (Wamsley et al. (2010)), subjects learned to find an exit to a maze in a virtual 3D environment. Subjects who were allowed to sleep exhibited a more complex understanding of the overall shape of the maze (Wamsley et al. (2010)). Using a biophysical model of a cortical network, Gonzalez et al. (2019) and Wei et al. (2018) showed that sleep dynamics promotes reactivation and helps to create distinct representations for unique memories by devoting synapses to specific memory traces. This body of neuroscience work suggests that a sleep-like activity may be applied to ANNs to enable the network to extract the gist of the training data without being constrained by the statistics of a specific training data set. Our specific hypothesis is that a sleep phase could aid in reducing a neural network’s susceptibility to adversarial attacks and to increase generalization performance by reducing the impact that imperceptible input changes can have on the task output.
In this new work, we propose a sleep-inspired algorithm to defend against adversarial attacks as well as to increase ANN robustness to noise. We utilize the notion of sleep from biology and apply an off-line unsupervised ”sleep” phase to modify the parameters of a fully connected ANN. We demonstrate a number of performance improvements over existing defense algorithms, such as finetuning or adversarial retraining and defensive distillation, on both adversarial and noise robustness. The contributions are summarized below:
• We analyze how robust the proposed sleep algorithm is to four different types of adversarial attacks on three different datasets (MNIST, CUB200, and a toy dataset). For most conditions (MNIST, toy dataset), after the sleep phase was applied, the attacks consistently resulted in adversarial examples that were more distinct from the original input compared to the adversarial examples designed for the original (before sleep) network.
• We illustrate that the sleep algorithm creates a more robust network whereby performance on noisy and blurred inputs is higher compared to the control or defensively distilled networks, and is more robust to the other types of distortions compared to ANNs that are fine-tuned on a single distortion.
• We analyze the impact of the sleep algorithm on task representation and demonstrate that the algorithm creates decision boundaries that more closely resemble the true classes, effectively extracting the gist of the data.
2 ADVERSARIAL ATTACKS AND DISTORTIONS
Adversarial attacks aim to create minimal perturbations that, while imperceptible to the human eye, fool ANNs. These attacks range from white-box to black-box attacks, based on how much information they assume the attacker to possess about the network. White-box attacks assume that the attacker has access to the network architecture, training data and weights. These attacks can range from absolute information, such as gradient-based attacks which compute the gradient of the loss with respect to the input (Brendel et al. (2017)), to score-based attacks which only utilize predicted scores of the model. Black-box attacks, which assume no knowledge about the network, solely rely on the decision made in order to craft adversarial examples. Attacks can be (a) targeted such that the attacker aims to create an adversarial example that the network would predict as a certain class or (b) untargeted where the attacker’s goal is simply to cause any kind of misclassification (Biggio & Roli (2018)).
In this work we consider four types of adversarial attacks ranging from white-box to black-box attacks. We assume that the attacker solely wants to cause a misclassification, with no respect to the output class. We present a brief description of each of the four attacks below (see Appendix for examples of images created by these attacks).
Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al. (2014)) computes the sign of the gradient of the loss function (J) with respect to the original input x using the weights θ of the network and the target labels y.
x′ = x + ε ∗ sign(∇xJ(θ, x, y)). The sign term represents the direction to change each pixel in the original input in order to increase the loss function. Based on the value of ε, the corresponding perturbation to the original image can range from small to large. Thus, in this work we use the average of the smallest values of ε needed to create an adversarial example x′ (misclassified input) for each input in the testing set.
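As a concrete illustration (not the authors' code), FGSM can be sketched in a few lines of PyTorch; the model, loss, and ε value below are placeholders rather than settings from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Compute x' = x + epsilon * sign(grad_x J(theta, x, y)) for a batch of inputs.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in a valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()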
DeepFool. DeepFool (Moosavi-Dezfooli et al. (2016)) is an iterative method which approximates the nearest decision boundary to the input at time t and moves the input xt in that direction to compute xt+1. This process is repeated until a misclassification is produced or the runtime of the simulation is exceeded. For this attack, we measure the L2-norm between the original input x and the adversarial input x′. Thus, successful defenses should result in a high L2-norm for this algorithm.
Jacobian-based Saliency Map (JSMA). JSMA (Papernot et al. (2016a)) aims to craft adversarial examples that minimize the L0-norm of x− x′ by reducing the number of pixels that are altered. In summary, the algorithm computes the gradient, as done in FGSM but for all possible classes. These gradient values represent how changing each pixel contributes to the overall loss function, with large values indicating a significant effect on the loss. These values are used to create a saliency map, where each pixel’s impact on the loss is modelled. The algorithm utilizes this saliency map to alter individual pixels, repeating the gradient and saliency map computation until an adversarial example is created. For this type of attack, we utilize the L2-norm to determine defense success.
Boundary Attack. The Boundary Attack (Brendel et al. (2017)) is a black-box attack which relies solely on the decision of the ANN to craft an adversarial example. Given an input x, a random input x′0 is chosen such that f(x) ≠ f(x′0), where f(x) is the label produced by the ANN. In our work, x′0 is chosen from a uniform distribution. The attack starts by moving x′0 toward x until it reaches the point where f(x) = f(x′0), or the decision boundary between f(x) and f(x′0). From here, the attack consists of two steps: an orthogonal perturbation and a forward perturbation. During the orthogonal perturbation, random points on the hypersphere around x′t are sampled. Those that are adversarial and closer to x than before are added to the queue for forward perturbation. During the forward perturbation, a small step is taken from x′t toward x as long as f(x) ≠ f(x′t). This process is repeated until a convergence criterion is met. For this attack, we utilize the L2-norm to define defense success.
Distortions. Although not specifically designed to attack an ANN, distortions negatively impact ANN performance. In this work we consider two simple distortion techniques: blurring and Gaussian noise. In the first case, we perform 2-D Gaussian filtering with a blur kernel of varying standard deviation in order to blur the images. In the second case, we add Gaussian noise with mean 0 and standard deviation σ. These distortions are tested in the networks implementing the proposed sleep algorithm as well as using the adversarial defenses discussed below.
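For reference, both distortions can be generated with standard NumPy/SciPy operations, as in the sketch below; the kernel width and noise level shown are arbitrary illustrative choices, not the values used in the experiments.

import numpy as np
from scipy.ndimage import gaussian_filter

def blur(image, sigma=1.0):
    # 2-D Gaussian filtering with a kernel of standard deviation sigma.
    return gaussian_filter(image, sigma=sigma)

def add_gaussian_noise(image, sigma=0.1):
    # Additive Gaussian noise with mean 0 and standard deviation sigma, clipped to [0, 1].
    noise = np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)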
3 ADVERSARIAL DEFENSES
We compare our sleep algorithm with two existing adversarial defenses: defensive distillation and fine-tuning, or adversarial retraining. Defensive distillation (Papernot et al. (2016b)) utilizes two training sessions in order to create a distilled network. First, an initial network is trained on (X,Y ), where X is the training data, and Y is the one-hot encoded training labels. The activation function of this network is changed such that the softmax function of the output layer is computed using a temperature term T as follows:
F(x)i = exp(zi(X)/T) / ∑l=0..N−1 exp(zl(X)/T).
A higher T forces the ANN to produce a softer output distribution, with non-negligible probability values for each class, whereas lower T values support a representation similar to the one-hot encoded labels. After the first network is trained, the output of the network (probability values) is used to train a distilled network with the same softmax-temperature function. Previous work has shown this approach can be successful at preventing some types of attacks (Papernot et al. (2016b)). However, others have shown that it is not successful at defending against modified versions of those attacks or novel attacks in general (Carlini & Wagner (2016; 2017)). Based on the previous work which found that temperature values between 20 and 100 effectively prevent adversarial attacks (Papernot et al. (2016b)), we use a temperature value of T = 50 in our implementation of defensive distillation.
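The temperature-scaled softmax itself is simple to express; the sketch below (with logits standing in for the pre-softmax outputs z(X)) is only meant to illustrate the formula above, not the authors' training pipeline.

import numpy as np

def softmax_with_temperature(logits, T=50.0):
    # Compute exp(z_i / T) / sum_l exp(z_l / T) in a numerically stable way.
    scaled = logits / T
    scaled = scaled - scaled.max(axis=-1, keepdims=True)
    exp = np.exp(scaled)
    return exp / exp.sum(axis=-1, keepdims=True)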
Adversarial retraining aims to fine-tune the network on adversarial examples with the correct labels as a form of regularization. Previous work has shown that adversarial retraining can mitigate the effectiveness of some adversarial attacks. Goodfellow et al. (2014) showed that adversarial retraining can reduce the error rate on MNIST, demonstrating greater ANN robustness after fine-tuning. Likewise, Moosavi-Dezfooli et al. (2016) showed that fine-tuning on DeepFool attacks can reduce the effectiveness of their attacks. However, they observed that fine-tuning on FGSM attacks has negative results, actually increasing the strength of the attack. This suggests that fine-tuning may overfit the network to certain attacks, while failing to extrapolate to other attacks, similar to results shown for generalization in ANNs (Geirhos et al. (2018)). For the adversarial retraining procedure presented here, we train the network on the original input and then fine-tune the network on various adversarial attacks with a reduced learning rate.
4 SLEEP ALGORITHM
The basic intuition behind the sleep algorithm is that a period of offline activity, whereby network weights are modified according to an unsupervised learning algorithm, allows the parameters of the network to become more reflective of the underlying statistics of the task at hand, while not overfitting the statistics of the training data. The pseudocode is presented in Algorithm 1. In short, an ANN is trained using stochastic gradient descent and the standard backpropagation algorithm (exact parameters used for each of the datasets are shown in Table 2). After training, the network structure is converted into a spiking neural network (SNN). After building the SNN, we run a sleep phase which modifies the network connectivity based on spike-timing dependent plasticity (STDP). After the sleep phase, the SNN network is converted back into the ANN and testing is performed.
4.1 SPIKING NEURAL NETWORKS
SNNs seek to model closely temporal brain dynamics. In short, SNNs are composed of spiking neurons and model the information transformation and the dependence on exact timing of spikes that occurs in biological networks (Ghosh-Dastidar & Adeli (2009)). Individual neuron models can range from simple integrate-and-fire type neurons which sum their inputs and produce an output (spike) if this exceeds some firing threshold to more complex Hodgkin-Huxley type neurons which model sodium-, potassium-, and chloride-channel kinetics (Abbott & Kepler (1990)). Recent work has shown that a near loss-less conversion between ANNs and SNNs can be achieved by propagating activity through a spiking neural network for a given input and counting the number of times that each output neuron fires (Diehl et al. (2015)).
To convert an ANN to SNN (Lines 1-3 of pseudocode), we assume the ANN utilizes ReLU neurons with no bias. This assumption is made so that the output neuron’s activation can be treated as a firing rate, either zero or positive, and that the thresholds of all neurons in a given layer are of the same scale. The weights from the ANN are directly mapped to the SNN. In our analysis, each unit in the SNN is modelled as an integrate-and-fire type neuron, computing the following equation:
τm dv/dt = −v(t) + ∑i=1..N wi ∗ s(i).
Here, τm represents the decay constant of the membrane potential, v is the voltage at a given time, wi is the weight connecting from neuron i, and s(i) is the spiking activity of neuron i, either 1 or 0.
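For intuition, a forward-Euler discretization of this equation for one layer might look like the sketch below; note that Algorithm 1 uses a simpler accumulate-and-threshold variant, and the time step, decay constant, threshold, and reset rule here are illustrative assumptions.

import numpy as np

def lif_step(v, spikes_in, W, tau_m=20.0, dt=1.0, threshold=1.0):
    # One Euler step of tau_m * dv/dt = -v + W @ s, followed by thresholding.
    # v: membrane potentials (n_out,); spikes_in: binary spikes from the previous layer (n_in,);
    # W: weight matrix (n_out, n_in).
    v = v + (dt / tau_m) * (-v + W @ spikes_in)
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out > 0, 0.0, v)  # reset neurons that fired (assumption, not stated in the text)
    return v, spikes_out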
4.2 PLASTICITY AND SLEEP
The key advantage of using an SNN is that biologically inspired training rules can be applied while the network is driven by noisy input. Empirical data suggest that the brain uses spike-timing dependent plasticity (STDP) (Song et al., 2000), where weight updates depend on the relative timing of pre- and post-synaptic spikes. It has been shown that STDP results in balanced activity, where all neurons fire in equal proportions (Song et al. (2000)). Here we utilize a modified version of STDP: if a presynaptic spike induces a post-synaptic spike, then the weight between these neurons is increased. If a post-synaptic spike occurs, but the pre-synaptic neuron does not spike, then the corresponding weight is decreased (in this case post-synaptic spiking may occur because of spiking in other neurons connecting to that post-synaptic neuron).
The sleep training phase we propose here can be described as follows. First, inputs to each neuron of the input layer must be presented as spiking activity in order to propagate activity from the input layer to the hidden layers of the network. We convert inputs (real-valued pixel intensities or features) to spikes by defining a maximum firing rate fmax with units of spikes/sec and computing a Poisson-distributed spike raster, such that inputs with higher values (i.e. brighter pixels) will have a higher rate than inputs with lower values, with no spike rates exceeding fmax. Next, activity is propagated through the network as spikes and the STDP rule is applied to update weights. In biological networks, an increase of synaptic strength during slow-wave sleep leads to characteristic patterns of activity with repetitive periods of elevated firing (Up-states), when previously learned memory traces are spontaneously replayed. To simulate these dynamics, synaptic weights in the SNN are up-scaled to induce high firing rates in later layers. Other important parameters include the threshold for each layer and the length of sleep. The parameters used for each dataset are presented in Table 3.
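One common way to realize this conversion is to treat each (scaled) input intensity as a per-time-step firing probability, as in this sketch; the assumptions here are that inputs lie in [0, 1], the time step is 1 ms, and fmax is given in Hz.

import numpy as np

def poisson_spike_train(intensities, n_steps, f_max=200.0, dt=1e-3):
    # Convert inputs in [0, 1] to a binary spike raster of shape (n_steps, n_inputs);
    # brighter pixels receive higher firing rates, capped at f_max.
    rates = intensities.ravel() * f_max            # spikes/sec per input
    p_spike = np.clip(rates * dt, 0.0, 1.0)        # probability of a spike in one time step
    return (np.random.rand(n_steps, rates.size) < p_spike).astype(float)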
4.3 EXPERIMENTS AND DATASETS
Below, we describe the general experimental setup as well as the datasets tested. First, we trained a control ANN using the training set for each of the main datasets used in this study. Next, we created a defensively distilled network using T = 50 for the temperature parameter to create the second test network. Then, we fine-tuned the control ANN on a specific attack or distortion method to create the third test network. Finally, we converted the control ANN to an SNN and applied the sleep algorithm as described above to create the fourth test network. We created adversarial examples for each of these four networks using the attacks we described above (fine-tuned networks are tested on the attacks they were fine-tuned on). Then, we analyze how successful each attack is to fool each of the four networks using the metrics defined above. For generalization (blur and noise), we performed the same setup as above creating four different networks. We then tested each network on varying levels of distortion. We tested networks fine-tuned on blurred and noisy images to measure how performance generalizes across distortion methods. We averaged performance across a minimum of three networks for each attack and distortion.
We used three datasets to compare performance: Patches (a toy dataset created simply for analysis), MNIST (LeCun et al. (1998)), and CUB-200 (Welinder et al. (2010)). Patches consists of four binary images arranged in a 10x10 square. Each image has its own label (1-4), and consists of 25 bright pixels (value set to 1) and 75 dark pixels. The overlap of bright pixels among the four images (see Appendix) is chosen such that the task is not trivial. The MNIST dataset consists of 70,000 28x28 greyscale images of handwritten digits, with 60,000 in the training set and 10,000 in the testing set. CUB-200 is a high resolution dataset of images of birds with 200 bird species, with very few (∼30) images per class.
Algorithm 1 Sleep:
1: procedure CONVERTANNTOSNN(nn)
2:     Map the weights from (nn) with ReLU units to network of integrate-fire units (snn)
3:     Apply weight normalization and return scale for each layer ([24]); return snn, scales
4: procedure CONVERTSNNTOANN(nn)
5:     Directly map the weights from integrate-fire network (nn) to ReLU network (ann); return ann
6: procedure SLEEP(nn, I, scales)    ▷ I is input
7:     Initialize v (voltage) = 0 vectors for all neurons
8:     for t ← 1 to Ts do    ▷ Ts - duration of sleep
9:         S(1, t) ← Convert input I to Poisson-distributed spiking activity
10:        for l ← 2 to n do    ▷ n - number of layers
11:            v(l, t) ← v(l, t − 1) + (scales(l − 1) W(l, l − 1) S(l − 1, t))    ▷ W(l, l−1) - weights
12:            S(l, t) ← v(l, t) > threshold(l)    ▷ Propagate spikes
13:            W(l, l − 1) ← W(l, l − 1) + inc if S(l, t) = 1 & S(l − 1, t) = 1; W(l, l − 1) − dec if S(l, t) = 1 & S(l − 1, t) = 0    ▷ STDP
14: procedure MAIN
15:     Initialize neural network (ann) with ReLU neurons and bias = 0.
16:     Train ann using backpropagation.
17:     snn, scales = ConvertANNtoSNN(ann)
18:     snn = Sleep(snn, Training data X, scales)
19:     ann = ConvertSNNtoANN(snn)
For this dataset, we used previously extracted ResNet-50 embeddings, where ResNet-50 was pre-trained on ImageNet (He et al. (2016)). For CUB-200, we do not report results for blurring, since we are using extracted features, not images.
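To make Algorithm 1 concrete, a minimal NumPy sketch of the sleep phase for a stack of fully connected layers is given below. The scaling factors, thresholds, increment/decrement sizes, and the voltage reset are illustrative assumptions; the authors' actual implementation and parameter values (Table 3) may differ.

import numpy as np

def sleep_phase(weights, inp, scales, thresholds, T_sleep=1000,
                inc=0.001, dec=0.0005, f_max=200.0, dt=1e-3):
    # weights[l] has shape (n_out, n_in); inp is a flat input vector scaled to [0, 1].
    voltages = [np.zeros(W.shape[0]) for W in weights]
    for t in range(T_sleep):
        # Line 9 of Algorithm 1: sample Poisson-like input spikes for this time step.
        spikes = [(np.random.rand(inp.size) < inp * f_max * dt).astype(float)]
        for l, W in enumerate(weights):
            # Lines 11-12: integrate weighted spikes and emit spikes above threshold.
            voltages[l] += scales[l] * (W @ spikes[l])
            post = (voltages[l] > thresholds[l]).astype(float)
            voltages[l][post > 0] = 0.0  # reset fired neurons (assumption; not specified in Algorithm 1)
            # Line 13: simplified STDP - potentiate pre&post pairs, depress post-without-pre pairs.
            W += inc * np.outer(post, spikes[l])
            W -= dec * np.outer(post, 1.0 - spikes[l])
            spikes.append(post)
    return weights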
5 RESULTS
We evaluate the sleep algorithm in two settings: (1) Adversarial attacks designed to fool neural networks and (2) generalization distortions designed to reflect imperfect viewing conditions or other types of noise. For adversarial attacks (other than FGSM), we utilize the following metric to evaluate the success of each defense. Let x′i be the adversarial example created for input xi. The total score SA for an attack is the median squared L2-distance for all samples, where N is the dimension of the space:
SA = median( (1/N) ‖x′i − xi‖₂² ).
For FGSM, we define the following metric which computes the median minimum noise level needed to produce a misclassification across all samples:
SFGSM = median(min(εi)) s.t. f(xi + εi ∗ x′i) ≠ f(xi).
For MNIST and CUB-200, we evaluate the attacks on all examples in the testing set. Examples that the networks get wrong before the attack was implemented are discarded from the analysis (in these cases ‖x′i − xi‖₂² = 0 and εi = 0 for all attacks). For FGSM and distortions, we also include plots of classification accuracy as a function of noise level. For DeepFool and JSMA, we report adversarial attack accuracy (number of examples where f(x) = y and f(x′) ≠ f(x), where y is the correct label, over number of examples tested). Note that these algorithms would always produce an adversarial example if allowed to run forever. However, due to computational limitations, we included run-time limits on the number of iterations for these algorithms (see Appendix). Thus, a lower adversarial attack accuracy indicates that the attack would need more iterations to run to reach 100% accuracy. This is a similar measure as distance since more iterations would result in more distinct adversaries for all attacks implemented and the updates at each iteration have the same magnitude for each defense.
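In code, both scores can be computed directly from the stored adversarial examples; the array shapes below are assumptions for illustration.

import numpy as np

def attack_score(x, x_adv):
    # S_A: median over samples of (1/N) * squared L2 distance, for arrays of shape (num_samples, N).
    return np.median(np.sum((x_adv - x) ** 2, axis=1) / x.shape[1])

def fgsm_score(min_epsilons):
    # S_FGSM: median over samples of the smallest epsilon that produces a misclassification.
    return np.median(np.asarray(min_epsilons))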
[Figure 1: A) Patches, B) MNIST, C) CUB-200 — classification accuracy as a function of FGSM noise level ε.]
5.1 ADVERSARIAL ATTACKS
Here we report the scores for all different attacks and for all datasets. For the FGSM attack, the sleep algorithm increases the median minimum noise needed for misclassification for all three datasets compared to the control network (also see Figure 1). For the MNIST dataset, the amount of noise needed to fool the network after the sleep algorithm was almost double that needed for either the fine-tuning or defensive distillation approaches. For the Patches dataset, both defensive distillation and fine-tuning increase the robustness of the network. However, on CUB-200, only fine-tuning and sleep were able to defend, albeit marginally, against the FGSM attack. Looking at the classification accuracy of the network as a function of noise added (ε, Figure 1), we observe that
in the Patches and CUB-200 dataset, sleep tends to have higher classification accuracy than the other methods for epsilon < 0.1. After this point, sleep tends to have equal classification accuracies as compared to the other methods. For MNIST, the baseline classification accuracy on the original test set decreases slightly compared to the other methods (80% after sleep). However, the performance remains high longer than for the other defense methods on images that were correctly classified. We observed that performance continued to drop after a sufficiently large amount of noise was added. This is biologically plausible as adding more noise to an image should result in image degradation and misclassifications. In sum, these results indicate that a sleep phase can successfully mitigate FGSM, more so than a control network.
For DeepFool, sleep has a significant effect on the defense score on the MNIST dataset, both reducing the attack success rate and increasing the distance between the adversarial example and the original input by an order of magnitude. For Patches and CUB-200 this effect is less pronounced, with fine-tuning or the control network performing better. We hypothesize that sleep was ineffective in preventing the DeepFool attack in tasks with very few exemplars per class (Patches) or a large number of classes (CUB-200). In CUB-200, there is a large number of classes so the distance between the input and the nearest decision boundary is smaller (this is supported by the fact that JSMA, an L0 attack, does worse than DeepFool for CUB-200 and vice versa for MNIST, control networks). In this case, sleep is unable to move the decision boundary of one class without impinging on the decision space of another class. In MNIST, where the decision space for one class is presumably larger, sleep can alter decision boundaries in a way that has a minimal effect on other classes.
Sleep successfully increases the network’s robustness to the JSMA attacks on MNIST and Patches, reducing the attack success rate in the case of MNIST and increasing the distance needed to create an adversary for Patches. On CUB-200, there is a marginal reduction in the adversarial attack accuracy compared to the control network. Defensive distillation and fine-tuning also reduce JSMA’s effectiveness. However, for these two defenses, in the case of MNIST, the networks were capable of finding an adversary for a higher percentage of the testing set. Thus, the effect of changing a small number of important pixels is mitigated after running the sleep algorithm.
For the Boundary Attack, we found that no defense mechanism helps compared to the control in decreasing the attack’s effectiveness on the MNIST dataset. However, for CUB-200 and Patches, the sleep algorithm results in a higher defense score than that for the control network. This lends support to the idea that sleep favorably alters decision boundaries so that it becomes harder to find an adversarial example that is close to the original image after the sleep phase. This also suggests that sleep is not simply obfuscating gradients, which has been a common criticism of several adversarial defenses (Athalye et al. (2018)), which are tested on white-box attacks. In fact, given the long run-time for convergence of this algorithm, if we define a threshold for adversarial attack success (L2-norm > 1), then sleep successfully defends against this attack on the MNIST dataset (see Table 4).
Why does sleep phase help? It has been shown that sleep tends to promote an increase in stronger weights while pruning weaker weights, thus increasing the width of the weights’ distribution (Gonzalez et al., 2019). This results in the consolidation of strong memories at the cost of diminishing weak memories. From this point of view, a memory is a subspace or abstraction in the decision space corresponding to a given class. Sleep may result in enlarging the subspace the network allocates to a stronger category while shrinking weaker ones (Figure 5A). The process of strengthening the strong memory also results in making it robust and noise invariant, as seen in Figure 5B where the first 8 categories (numbers 0-7) are strengthened and become more invariant to the FGSM attack, while the last two digits are essentially forgotten and the network cannot confidently predict exemplars from these classes (Figure 5C). If the noise is less targeted, as in the case of random noise or blurring, sleep does not need to alter the decision space as much to produce better generalization and can maintain a high baseline accuracy, as we demonstrate in the next section.
5.2 GENERALIZATION
Figure 2 shows the network performance for noisy and blurry distortions of data for MNIST (A) as well as noisy distortions for the CUB-200 feature embeddings (B, see Figure 3 for results on Patches). Overall, fine-tuning on an image distortion results in the best performance for that specific distortion. However, as was noted (Geirhos et al. (2018)), fine-tuning on a specific distortion does
not extend to other types of distortions. In our analysis, fine-tuning the network on blurred MNIST images results in high performance (> 80%) on blurred images. However, for noisy images, this performance was only marginally above the control network. The sleep algorithm increased performance for both distortion methods, since this approach is not tailored to any one representation of the training set.
Finally we tested how sleep increases robustness on blur and noise distortions. In biological systems, sleep increases generalization through replay of memories learned during awake which leads to changes in synaptic weights. These changes entail both an increase in synaptic weights associated with a specific task and pruning of synapses involved in other tasks (Gonzalez et al., 2019; Tononi & Cirelli, 2006). Figures 9 and 10 show that correlations among like digits in the hidden layers of the network are greater after applying sleep than before for noisy and blurred images. Likewise, pairs of different digits usually become decorrelated after sleep, suggesting synaptic pruning. We also show that both normalized spiking activity and activations of digit-specific neurons are higher after sleep than before (Figures 11 and 12, see Appendix for details). These results suggest that the sleep algorithm increases robustness through biologically plausible learning mechanisms involving replay of relevant activity during sleep phase.
6 CONCLUSIONS AND FUTURE DIRECTIONS
In this work, we show that a biologically inspired sleep algorithm can increase an ANN’s robustness to both adversarial attacks and general image distortions. The algorithm augments the normal (e.g., back-propagation based) training phase of an ANN with an unsupervised learning phase in the equivalent SNN modelled after how the biological brain utilises sleep to improve learning. We hypothesize that the unsupervised sleep phase creates more natural feature representations which in turn lead to more natural decision boundaries, thus increasing the robustness of the network. Although this robustness may come at a cost of overall accuracy, it has been shown that robustness may have multiple important benefits, such as more salient feature representations as well as invariance to input modifications (Tsipras et al. (2018)). We also show that the trade-off between robustness and accuracy does not always occur, particularly for image distortions such as noise or blur. Future work includes converting the sleep algorithm into a regularization technique to be applied in more
standardized machine learning frameworks as well as understanding the theoretical basis for the beneficial role of spike based plasticity rules in increasing network robustness.
7 ACKNOWLEDGEMENTS
This work was supported by the Lifelong Learning Machines program from DARPA/MTO (HR0011-18-2-0021) and ONR (MURI: N00014-16-1-2829).
8 APPENDIX
8.1 TRAINING PARAMETERS
Here, we define the neural network parameters used for each of the three datasets as well as the sleep, defensive distillation, and fine-tuning parameters. Table 2 shows parameters used to train each of the control networks discussed in the paper. All neural networks were trained with ReLU neurons. Table 3 shows the parameters used during sleep for each of the three datasets. Note that these parameters for MNIST and CUB-200 were chosen by running a genetic algorithm to maximize performance on the FGSM attack (performance was determined based on the training set so as not to overfit to the test set). For the other three attacks, parameters that maximized FGSM performance were used. Also, for noise and blur generalization, different parameters were chosen (not shown here).
For the defensively distilled networks tested in the paper, we first train an initial network using a temperature of 50. Then, we use the training set to compute soft labels and finetune the initial network on these soft labels for the same number of epochs and with the same learning rate.
For the fine-tuned networks, we take the control networks trained with the parameters shown in Table 2. The learning rate is reduced to 0.05 and the network is fine-tuned on a mixture of either adversarial attacks, blur or noise and the original images/features. For CUB-200, we perform finetuning for 10 epochs.
8.2 PATCHES ANALYSIS
The Patches dataset represents an easily interpretable example where we can understand what happens to the weights after sleep. Figure 3A shows an example of the dataset. Here, we have 4 images, each belonging to one of 4 different classes. 25 pixels are whitened in each image and the remaining 75 pixels are dark. There is a 15 pixel overlap, so that weights connecting from input to output layer must take this into account in order to separate the images. Figure 3B illustrates the blur and noise distortions tested for this dataset and Figure 3C shows the results for the blur and noise distortions.
After the network is trained, we can analyze the weights connecting from each of the 100 input neurons to the 4 output neurons (see Figure 4, top row). We theorize that optimally robust behavior would occur when weights connecting from ON-pixels are positive, weights connecting from
overlapping pixels are near 0, and weights connecting from OFF-pixels are negative. In this case, changing the value of overlapping pixels will have no effect on classification. Changing the value of OFF-pixels will cause the network to predict another class, where OFF-pixels may be ON-pixels or indicative of that class. Changing the value of ON-pixels will only have a negative impact if the brightness of the pixel is reduced significantly. Thus, in this circumstance, the network should behave robustly.
In the control network, we observe that weights connecting from ON-pixels (pixel-value = 1) increase while weights connecting from OFF-pixels remain at 0. Weights connecting from overlapping pixels remain near 0 or positive. Defensive distillation causes some weights connecting from overlapping pixels to decrease, likely because the soft labels used in defensive distillation cause overlapping pixel units to alter the probability values computed by the network in such a way that does not truly reflect the impact of the overlapping pixels. In the fine-tuned networks (both on blurred images and noisy images), we observe an increase in ON-pixel weights and an increase in noisiness of OFF-pixel weights. Likewise, in the sleep network, OFF-pixel weights become negative while ON-pixel weights remain the same. In these cases, robustness is increased as weights become more similar to our hypothesized ideal weights. Essentially, the magnitude of the input change needed to alter the classification increases since the spread between ON-pixel weights and OFF-pixel weights increases. We quantify the spread in weights by taking the difference between the average weight connecting from ON-pixels and the average weight connecting from OFF-pixels. This represents the mean input that each correct output neuron receives. This result is shown in Figure 3D. Of note is that this weight spread is increased for both the sleep and finetuning-noise network, suggesting that these defenses bring the weights closer to their ideal values for robustness.
8.3 ADVERSARIAL ATTACKS
Here, we describe the general approach for implementing DeepFool, JSMA, and the Boundary Attack discussed in the paper. We also show examples of adversaries created for each of the defense networks from these attacks.
Figure 6: A) DeepFool adversarial examples for each defense (rows: Control, Def. Dist., Finetuning, Sleep); the network’s prediction is shown above each image. B) JSMA. C) Boundary Attack.
DeepFool. DeepFool (Moosavi-Dezfooli et al. (2016)), as mentioned above, is an iterative algorithm that, at each iteration, aims to move the adversarial example in the direction of the closest decision boundary until it results in a misclassification. We based our implementation on that of Rauber et al. (2017). We stopped running the algorithm when either an adversarial example is found or when 100 iterations have passed. Examples of DeepFool attacks on the MNIST dataset are shown in Figure 6A. At each iteration we compute a linear approximation of the loss function and take a step in the
direction that would result in a misclassification. The equations used and pseudocode can be found in the original DeepFool paper.
JSMA. JSMA is also an iterative algorithm which computes the pixel that would change the loss function the most at each iteration and changes this pixel, until a misclassification is produced. For this method, we set a run-time limit of 500 iterations. We also remove a pixel from the saliency map when it has been updated seven times, so the algorithm can focus on other pixels. We set the change to each pixel at a constant value, 0.1. This represents how much each pixel is updated (in the direction that results in a misclassification) at each iteration. Pseudocode can be found in the original publication (Papernot et al. (2016a)). We show examples of adversaries created by JSMA in Figure 6B.
Boundary Attack. The Boundary Attack (Brendel et al. (2017)) starts with an adversarial example and moves it closer to the decision boundary of the correct class. At each step of the algorithm, the method performs orthogonal and forward perturbations to move the adversary closer to the original image, thus reducing the distance between the adversary and the original image. We set both a distance convergence criterion (L2-norm = 1e-7) and a run-time limitation on the attack (1000 iterations). Example attacks are shown in Figure 6C. We note that sometimes the algorithm does not successfully produce an ”imperceptible” adversarial example and instead produces a noisy output (the starting condition is a noisy image). If we define a threshold defining a successful adversarial attack (L2-Norm > 1), then we observe the results for MNIST in Table 4.
8.4 GENERALIZATION ANALYSIS
In this section, we analyze how sleep can aid in increasing ANN robustness. In biological networks, sleep extracts the gist of a task through replay (Lewis & Durrant (2011)). We hypothesized that our sleep algorithm works in the same manner. First, we tested the ability of the sleep network to decorrelate distinct inputs by analyzing the effect of running sleep and testing on our two distortion techniques (see Figure 7).
We computed the correlations of network activities in each of the hidden layers of the network before and after implementing our defense methods. For each pair of digits, we computed the average correlation of layer activities in the undistorted (Figure 8), noisy (Figure 9), and blurred (Figure 10) conditions. Each figure reports the difference in digit pairwise correlations between
the defense method and the control network for each set of inputs. For our sleep network, it is apparent that in layer 2 and layer 3, the correlations of the same digits (the diagonal) increases after sleep. Additionally, the correlation of distinct digits typically experiences negative change, representing decorrelation of distinct inputs. This analysis holds for defensive distillation and both of the fine-tuned networks. This suggests that the ANN representation of different exemplars of the same digit becomes more similar after sleep or after any of the defense networks when compared to the control. This is not simply due to an increased overlap of all inputs, since exemplars of different digits become decorrelated after applying a defense method.
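The pairwise correlation analysis itself can be reproduced with a few lines of NumPy, as sketched below; the activation matrices are assumed to have one row per example and one column per hidden unit.

import numpy as np

def mean_pairwise_correlation(acts_a, acts_b):
    # Average correlation between hidden-layer activation vectors of two groups of examples
    # (e.g., two digits); acts_a and acts_b have shape (num_examples, num_hidden).
    corr = np.corrcoef(acts_a, acts_b)      # rows of both arrays are treated as variables
    na = acts_a.shape[0]
    return corr[:na, na:].mean()            # mean correlation across the two groups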
Next, we performed the same correlation analysis on noisy and blurred images to see how the representation of distorted images changes after applying a distortion method. First, we note that finetuning on noisy images results in stronger correlation of the same (noisy) digit but weaker correlations of different (noisy) digits, as noted above. However, fine-tuning on blurred images does not have as strong an effect. Second, sleep seems to have a beneficial effect on the correlation matrices for both blurred and noisy images (comparing the right column of Figures 9 and 10). This illustrates the beneficial role of sleep in creating distinct representations of digits, where different neuronal ensembles encode different digits. This change in representation should result in increased robustness since changes to the input must be larger in order to recruit neuronal ensembles that represent other digits.
On top of decorrelating the representation of distinct memories by pruning synapses, biophysical modelling suggests that sleep can also aid in strengthening connections, thus strengthening the response of the primary neurons involved in memory recall (Gonzalez et al. (2019)). To test this hypothesis in our networks, we analyzed the firing rate and activations of digit-specific neurons before and after sleep. Before describing the analysis, we would like to note that SNNs can be used to perform classification and a near loss-less conversion between ANNs and SNNs has been achieved on the MNIST task (Diehl et al. (2015)). To perform classification, a digit is presented (as a Poisson spike train) to the network and spikes are propagated throughout the network for a given time period (or number of presentations of the input). Analyzing network activity in the spiking domain can be easier than in the activation domain (ANNs) since spikes are oftentimes easier to interpret than neuronal activations.
For this reason, we first analyze how spike rates of digit-specific neurons change before and after sleep in the spike domain. To do this, we present all images of a specified digit to the spiking network and count the number of spikes from each neuron (holding the weights constant). We define digit specificity by looking at the 100 neurons with the highest firing rates in layer 2. In Figure 11, we show that the normalized firing rate of these neurons usually increases after sleep (normalized by dividing by the maximum firing rate observed from the SNN).
Next, we perform the same analysis in the activation domain. Again, we define digit-specific neurons by looking at the top 100 neurons with the highest activation for a specific digit. We look at the normalized mean activations of these neurons before and after sleep and note that for all digits this value is higher after sleep than before sleep (Figure 12). This suggests that the neurons in the network are responding more strongly to the presentation of the same digit, thus increasing the robustness of the network as more noise must be added in order to counter the effect of this stronger response. This also suggests that our algorithm works in a biologically plausible way: both by decorrelating distinct inputs and increasing the strength of similar inputs.
| 1. What is the novel concept proposed by the paper in ANN training?
2. How does the method compare to prior works in terms of its effectiveness?
3. Are there any concerns regarding the choice of algorithm and pseudocode used?
4. How was the optimal sleep duration determined for each experiment?
5. Would combining this approach with existing methods enhance its performance? | Review | Review
The paper proposes an ANN training method for improving adversarial robustness and generalization, inspired by biological sleep.
I'm leaning towards accepting as it seems to be an original concept and has fairly extensive empirical results that are somewhat promising.
The idea of a sleep phase as an alternative to explicit adversarial or generalization training is interesting. The results suggest that the approach works reasonably well in many cases.
Suggestions for improvement / clarification:
- The mapping from biological sleep to the actual algorithm + pseudocode used could benefit from more thorough explanation. It is not clear which choices are arbitrary vs well-principled.
- Was the optimal sleep duration determined empirically for each experiment?
- I agree with the authors' proposed future work of better understanding and standardizing this approach.
- Consider combining this approach with the existing adversarial or generalizing approaches (instead of as an alternative). Do they complement each other? |
ICLR | Title
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
Abstract
Graph Convolutional Networks (GCNs) is the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead of communicating node features and feature gradients among partitions for every GCN layer during each training iteration, limiting the achievable training efficiency and model scalability. To this end, we propose PipeGCN, a simple yet effective scheme that hides the communication overhead by pipelining inter-partition communication with intra-partition computation. It is non-trivial to pipeline for efficient GCN training, as communicated node features/gradients will become stale and thus can harm the convergence, negating the pipeline benefit. Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients. This work not only provides a theoretical convergence analysis but also finds the convergence rate of PipeGCN to be close to that of the vanilla distributed GCN training without any staleness. Furthermore, we develop a smoothing method to further improve PipeGCN’s convergence. Extensive experiments show that PipeGCN can largely boost the training throughput (1.7×∼28.5×) while achieving the same accuracy as its vanilla counterpart and existing full-graph training methods. The code is available at https://github.com/RICE-EIC/PipeGCN.
1 INTRODUCTION
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) have gained great popularity recently as they demonstrated the state-of-the-art (SOTA) performance in learning graph-structured data (Zhang & Chen, 2018; Xu et al., 2018; Ying et al., 2018). Their promising performance is resulting from their ability to capture diverse neighborhood connectivity. In particular, a GCN aggregates all features from the neighbor node set for a given node, the feature of which is then updated via a multi-layer perceptron. Such a two-step process (neighbor aggregation and node update) empowers GCNs to better learn graph structures. Despite their promising performance, training GCNs at scale is still a challenging problem, as a prohibitive amount of compute and memory resources are required to train a real-world large-scale graph, let alone exploring deeper and more advanced models. To overcome this challenge, various sampling-based methods have been proposed to reduce the resource requirement at a cost of incurring feature approximation errors. A straightforward instance is to create mini-batches by sampling neighbors (e.g., GraphSAGE (Hamilton et al., 2017) and VR-GCN (Chen et al., 2018)) or to extract subgraphs as training samples (e.g., Cluster-GCN (Chiang et al., 2019) and GraphSAINT (Zeng et al., 2020)).
In addition to sampling-based methods, distributed GCN training has emerged as a promising alternative, as it enables large full-graph training of GCNs across multiple accelerators such as GPUs.
This approach first partitions a giant graph into multiple small subgraphs, each of which is able to fit into a single GPU, and then trains these partitioned subgraphs locally on GPUs together with indispensable communication across partitions. Following this direction, several recent works (Ma et al., 2019; Jia et al., 2020; Tripathy et al., 2020; Thorpe et al., 2021; Wan et al., 2022) have been proposed and have verified the great potential of distributed GCN training. P3 (Gandhi & Iyer, 2021) follows another direction that splits the data along the feature dimension and leverages intra-layer model parallelism for training, which shows superior performance on small models.
In this work, we propose a new method for distributed GCN training, PipeGCN, which targets achieving full-graph accuracy with boosted training efficiency. Our main contributions are as follows:
• We first analyze two efficiency bottlenecks in distributed GCN training: the required significant communication overhead and frequent synchronization, and then propose a simple yet effective technique called PipeGCN to address both aforementioned bottlenecks by pipelining inter-partition communication with intra-partition computation to hide the communication overhead.
• We address the challenge raised by PipeGCN, i.e., the resulting staleness in communicated features and feature gradients (neither weights nor weight gradients), by providing a theoretical convergence analysis and showing that PipeGCN's convergence rate is O(T^{-2/3}), i.e., close to vanilla distributed GCN training without staleness. To the best of our knowledge, we are the first to provide a theoretical convergence proof of GCN training with both stale features and stale feature gradients.
• We further propose a low-overhead smoothing method to further improve PipeGCN’s convergence by reducing the error incurred by the staleness.
• Extensive empirical and ablation studies consistently validate the advantages of PipeGCN over both vanilla distributed GCN training and those SOTA full-graph training methods (e.g., boosting the training throughput by 1.7×∼28.5× while achieving the same or a better accuracy).
2 BACKGROUND AND RELATED WORKS
Graph Convolutional Networks. GCNs represent each node in a graph as a feature (embedding) vector and learn the feature vector via a two-step process (neighbor aggregation and then node update) for each layer, which can be mathematically described as:
z_v^{(ℓ)} = ζ^{(ℓ)}({ h_u^{(ℓ−1)} | u ∈ N(v) })   (1)
h_v^{(ℓ)} = φ^{(ℓ)}(z_v^{(ℓ)}, h_v^{(ℓ−1)})   (2)
where N(v) is the neighbor set of node v in the graph, h_v^{(ℓ)} represents the learned embedding vector of node v at the ℓ-th layer, z_v^{(ℓ)} is an intermediate aggregated feature calculated by an aggregation function ζ^{(ℓ)}, and φ^{(ℓ)} is the function for updating the feature of node v. The original GCN (Kipf & Welling, 2016) uses a weighted average aggregator for ζ^{(ℓ)}, and the update function φ^{(ℓ)} is a single-layer perceptron σ(W^{(ℓ)} z_v^{(ℓ)}), where σ(·) is a non-linear activation function and W^{(ℓ)} is a weight matrix. Another famous GCN instance is GraphSAGE (Hamilton et al., 2017), in which φ^{(ℓ)} is σ(W^{(ℓ)} · CONCAT(z_v^{(ℓ)}, h_v^{(ℓ−1)})).
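To make the two-step process concrete, the following minimal sketch (an illustration under assumed names and toy sizes, not the authors' implementation) computes one GraphSAGE-style layer with a mean aggregator in plain NumPy.

import numpy as np

def graphsage_layer(H, neighbors, W, sigma=np.tanh):
    # H: (num_nodes, d_in) embeddings from the previous layer
    # neighbors: dict mapping node v to the list of its neighbor ids N(v)
    # W: (2 * d_in, d_out) weight matrix of the GraphSAGE-style update
    H_new = np.zeros((H.shape[0], W.shape[1]))
    for v, nbrs in neighbors.items():
        z_v = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])  # Equ. 1: neighbor aggregation
        H_new[v] = sigma(np.concatenate([z_v, H[v]]) @ W)             # Equ. 2: node update
    return H_new

# toy usage: a 3-node path graph 0-1-2
H = np.random.randn(3, 4)
neighbors = {0: [1], 1: [0, 2], 2: [1]}
W = np.random.randn(8, 5)
print(graphsage_layer(H, neighbors, W).shape)  # (3, 5)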
Distributed Training for GCNs. A real-world graph can contain millions of nodes and billions of edges (Hu et al., 2020), for which a feasible training approach is to partition it into small subgraphs (to fit into each GPU's resources) and train them in parallel, during which necessary communication is performed to exchange boundary node features and gradients to satisfy GCNs' neighbor aggregation (Equ. 1). Such an approach is called vanilla partition-parallel training and is illustrated in Fig. 1(a). Following this approach, several works have been proposed recently. NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and ROC (Jia et al., 2020) perform such partition-parallel training but rely on CPUs for storing all partitions and repeatedly swap partial partitions to GPUs. Inevitably, prohibitive CPU-GPU swaps are incurred, plaguing the achievable training efficiency. CAGNET (Tripathy et al., 2020) is different in that it splits each node feature vector into tiny sub-vectors which are then broadcast and computed sequentially, thus requiring redundant communication and frequent synchronization. Furthermore, P3 (Gandhi & Iyer, 2021) proposes to split both the feature and the GCN layer for mitigating the communication overhead, but it makes a strong assumption that the hidden dimensions of a GCN should be considerably smaller than that of the input features, which restricts the model size. A concurrent work, Dorylus (Thorpe et al., 2021), adopts a fine-grained pipeline along each compute operation in GCN training and supports asynchronous usage of stale features. Nevertheless, the resulting staleness of feature gradients is neither analyzed nor considered in a convergence proof, let alone addressed by error reduction methods.
Figure 1: An illustrative comparison between vanilla partition-parallel training and PipeGCN: (a) vanilla partition-parallel training, (b) the timeline of vanilla partition-parallel training, and (c) PipeGCN.
Asynchronous Distributed Training. Many prior works have been proposed for asynchronous distributed training of DNNs. Most works (e.g., Hogwild! (Niu et al., 2011), SSP (Ho et al., 2013), and MXNet (Li et al., 2014)) rely on a parameter server with multiple workers running asynchronously to hide the communication overhead of weights/weight gradients among each other, at the cost of using stale weight gradients from previous iterations. Other works like Pipe-SGD (Li et al., 2018b) pipeline such communication with the local computation of each worker. Another direction is to partition a large model along its layers across multiple GPUs and then stream small data batches through the layer pipeline, e.g., PipeDream (Harlap et al., 2018) and PipeMare (Yang et al., 2021). Nonetheless, all these works aim at large models with small data, where the communication overhead of model weights/weight gradients is substantial but data feature communication is marginal (if present at all), and thus are not well suited for GCNs. More importantly, they focus on convergence with stale weight gradients of models, rather than the stale features/feature gradients incurred in GCN training. Tab. 1 summarizes the differences. In a nutshell, little effort has been made to study asynchronous or pipelined distributed training of GCNs, where feature communication plays the major role, let alone the corresponding theoretical convergence proofs.
GCNs with Stale Features/Feature Gradients. Several recent works have been proposed to adopt either stale features (Chen et al., 2018; Cong et al., 2020) or stale feature gradients (Cong et al., 2021) in single-GPU training of GCNs. Nevertheless, their convergence analysis considers only one of the two kinds of staleness and derives a convergence rate of O(T^{-1/2}) for pure sampling-based methods. This is, however, limited in distributed GCN training, as its convergence is simultaneously affected by both kinds of staleness. PipeGCN proves such convergence with both stale features and stale feature gradients and offers a better rate of O(T^{-2/3}). Furthermore, none of the previous works has studied the errors incurred by staleness, which harm the convergence speed, while PipeGCN develops a low-overhead smoothing method to reduce such errors.
3 THE PROPOSED PIPEGCN FRAMEWORK
Overview. To enable efficient distributed GCN training, we first identify the two bottlenecks associated with vanilla partition-parallel training: substantial communication overhead and frequently synchronized communication (see Fig. 1(b)), and then address them directly by proposing a novel strategy, PipeGCN, which pipelines the communication and computation stages across two adjacent iterations in each partition of distributed GCN training for breaking the synchrony and then hiding the communication latency (see Fig. 1(c)). It is non-trivial to achieve efficient GCN training with such a pipeline method, as staleness is incurred in communicated features/feature gradients and
more importantly little effort has been made to study the convergence guarantee of GCN training using stale feature gradients. This work takes an initial effort to prove both the theoretical and empirical convergence of such a pipelined GCN training method, and for the first time shows its convergence rate to be close to that of vanilla GCN training without staleness. Furthermore, we propose a low-overhead smoothing method to reduce the errors due to stale features/feature gradients for further improving the convergence.
3.1 BOTTLENECKS IN VANILLA PARTITION-PARALLEL TRAINING
Significant communication overhead. Fig. 1(a) illustrates vanilla partition-parallel training, where each partition holds inner nodes that come from the original graph and boundary nodes that come from other subgraphs. These boundary nodes are demanded by the neighbor aggregation of GCNs across neighbor partitions, e.g., in Fig. 1(a) node-5 needs nodes-[3,4,6] from other partitions for calculating Equ. 1. Therefore, it is the features/gradients of boundary nodes that dominate the communication overhead in distributed GCN training. Note that the amount of boundary nodes can be excessive and far exceeds the inner nodes, as the
boundary nodes are replicated across partitions and scale with the number of partitions. Besides the sheer size, communication of boundary nodes occurs for (1) each layer and (2) both forward and backward passes, making communication overhead substantial. We evaluate such overhead¹ in Tab. 2 and find communication to be dominant, which is consistent with CAGNET (Tripathy et al., 2020).
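As a rough illustration of why this traffic dominates, the back-of-envelope sketch below estimates the per-partition boundary communication of one training epoch; every number in it (boundary-node count, hidden size, depth) is a made-up assumption for illustration, not a figure behind Tab. 2.

# Rough per-partition, per-epoch boundary traffic estimate (all numbers are assumptions).
num_boundary_nodes = 1_000_000   # boundary nodes replicated into this partition
hidden_dim = 256                 # feature width per layer
num_layers = 4                   # GCN depth
bytes_per_value = 4              # fp32

# Boundary features are exchanged in every layer of the forward pass and
# boundary feature gradients in every layer of the backward pass.
per_layer_pass = num_boundary_nodes * hidden_dim * bytes_per_value
per_epoch_bytes = per_layer_pass * num_layers * 2   # forward + backward
print(f"~{per_epoch_bytes / 1e9:.1f} GB of boundary traffic per epoch")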
Frequently synchronized communication. The aforementioned communication of boundary nodes must be finished before calculating Equ. 1 and Equ. 2, which inevitably forces synchronization between communication and computation and requires a fully sequential execution (see Fig. 1(b)). Thus, for most of training time, each partition is waiting for dominant features/gradients communication to finish before the actual compute, repeated for each layer and for both forward and backward passes.
3.2 THE PROPOSED PIPEGCN METHOD
Fig. 1(c) illustrates the high-level overview of PipeGCN, which pipelines the communicate and compute stages spanning two iterations for each GCN layer. Fig. 2 further provides the detailed end-to-end flow, where PipeGCN removes the heavy communication overhead in the vanilla approach by breaking the synchronization between communicate and compute and hiding communicate with compute of each GCN layer. This is achieved by deferring the communicate to the next iteration's compute (instead of serving the current iteration) such that compute and communicate can run in parallel.
¹The detailed setting can be found in Sec. 4.
Algorithm 1: Training a GCN with PipeGCN (per-partition view).
Input: partition id i, partition count n, graph partition G_i, propagation matrix P_i, node features X_i, labels Y_i, boundary node set B_i, layer count L, learning rate η, initial model W_0
Output: trained model W_T after T iterations
1:  V_i ← {node v ∈ G_i : v ∉ B_i}                                  ▷ create inner node set
2:  Broadcast B_i and Receive [B_1, ..., B_n]
3:  [S_{i,1}, ..., S_{i,n}] ← [B_1 ∩ V_i, ..., B_n ∩ V_i]
4:  Broadcast V_i and Receive [V_1, ..., V_n]
5:  [S_{1,i}, ..., S_{n,i}] ← [B_i ∩ V_1, ..., B_i ∩ V_n]
6:  H^{(0)} ← [X_i ; 0]                                             ▷ initialize node features, set boundary features to 0
7:  for t := 1 → T do
8:    for ℓ := 1 → L do                                             ▷ forward pass
9:      if t > 1 then
10:       wait until thread_f^{(ℓ)} completes
11:       [H^{(ℓ−1)}_{S_{1,i}}, ..., H^{(ℓ−1)}_{S_{n,i}}] ← [B^{(ℓ)}_1, ..., B^{(ℓ)}_n]   ▷ update boundary features
12:     end
13:     with thread_f^{(ℓ)}:                                        ▷ communicate boundary features in parallel
14:       Send [H^{(ℓ−1)}_{S_{i,1}}, ..., H^{(ℓ−1)}_{S_{i,n}}] to partitions [1, ..., n] and Receive [B^{(ℓ)}_1, ..., B^{(ℓ)}_n]
15:     H^{(ℓ)}_{V_i} ← σ(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1})              ▷ update inner node features
16:   end
17:   J^{(L)}_{V_i} ← ∂Loss(H^{(L)}_{V_i}, Y_i) / ∂H^{(L)}_{V_i}
18:   for ℓ := L → 1 do                                             ▷ backward pass
19:     G^{(ℓ)}_i ← [P_i H^{(ℓ−1)}]^⊤ (J^{(ℓ)}_{V_i} ◦ σ′(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1}))   ▷ calculate weight gradient
20:     if ℓ > 1 then
21:       J^{(ℓ−1)} ← P_i^⊤ (J^{(ℓ)}_{V_i} ◦ σ′(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1})) [W^{(ℓ)}_{t−1}]^⊤   ▷ calculate feature gradient
22:       if t > 1 then
23:         wait until thread_b^{(ℓ)} completes
24:         for j := 1 → n do
25:           J^{(ℓ−1)}_{S_{i,j}} ← J^{(ℓ−1)}_{S_{i,j}} + C^{(ℓ)}_j                       ▷ accumulate feature gradient
26:         end
27:       end
28:       with thread_b^{(ℓ)}:                                      ▷ communicate boundary feature gradients in parallel
29:         Send [J^{(ℓ−1)}_{S_{1,i}}, ..., J^{(ℓ−1)}_{S_{n,i}}] to partitions [1, ..., n] and Receive [C^{(ℓ)}_1, ..., C^{(ℓ)}_n]
30:     end
31:   end
32:   G ← AllReduce(G_i)                                            ▷ synchronize model gradient
33:   W_t ← W_{t−1} − η · G                                         ▷ update model
34: end
35: return W_T
Inevitably, staleness is introduced in the deferred communication and results in a mixed usage of fresh inner features/gradients and stale boundary features/gradients.
Analytically, PipeGCN is achieved by modifying Equ. 1. For instance, when using a mean aggregator, Equ. 1 and its corresponding backward formulation in PipeGCN become:
z_v^{(t,ℓ)} = MEAN({h_u^{(t,ℓ−1)} | u ∈ N(v) \ B(v)} ∪ {h_u^{(t−1,ℓ−1)} | u ∈ B(v)})   (3)
δ_{h_u}^{(t,ℓ)} = ∑_{v: u∈N(v)\B(v)} (1/d_v) · δ_{z_v}^{(t,ℓ+1)} + ∑_{v: u∈B(v)} (1/d_v) · δ_{z_v}^{(t−1,ℓ+1)}   (4)
where B(v) is node v's boundary node set, d_v denotes node v's degree, and δ_{h_u}^{(t,ℓ)} and δ_{z_v}^{(t,ℓ)} represent the gradient approximations of h_u and z_v at layer ℓ and iteration t, respectively. Lastly, the implementation of PipeGCN is outlined in Alg. 1.
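To see how Equ. 3 is realized in practice, the following minimal sketch mixes fresh inner-node features with the stale boundary features received during the previous iteration; the dictionary-based buffers and variable names are illustrative assumptions, not the released DGL implementation.

import numpy as np

def pipegcn_mean_aggregate(v, H_cur, H_boundary_prev, neighbors, boundary):
    # H_cur: fresh features of inner nodes at the current iteration t
    # H_boundary_prev: stale boundary features received for iteration t-1
    # neighbors[v]: neighbor ids N(v); boundary: set of boundary node ids B(v)
    feats = []
    for u in neighbors[v]:
        if u in boundary:
            feats.append(H_boundary_prev[u])   # stale term of Equ. 3
        else:
            feats.append(H_cur[u])             # fresh term of Equ. 3
    return np.mean(feats, axis=0)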
3.3 PIPEGCN’S CONVERGENCE GUARANTEE
As PipeGCN adopts a mixed usage of fresh inner features/gradients and stale boundary features/gradients, its convergence rate is still unknown. We have proved the convergence of PipeGCN and present the convergence property in the following theorem.
Theorem 3.1 (Convergence of PipeGCN, informal version). There exists a constant E such that for any arbitrarily small constant ε > 0, we can choose a learning rate η = √ε / E and a number of training iterations T = (L(θ^{(1)}) − L(θ*)) E ε^{−3/2} such that
(1/T) ∑_{t=1}^{T} ‖∇L(θ^{(t)})‖^2 ≤ O(ε)
where L(·) is the loss function, θ(t) and θ∗ represent the parameter vector at iteration t and the optimal parameter respectively.
Therefore, the convergence rate of PipeGCN is O(T^{-2/3}), which is better than that of sampling-based methods (O(T^{-1/2})) (Chen et al., 2018; Cong et al., 2021) and close to that of full-graph training (O(T^{-1})). The formal version of the theorem and our detailed proof can be found in Appendix A.
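To make the dependence on the target accuracy explicit, the hyperparameter choices behind this rate scale as follows (an illustrative restatement of Theorem 3.1, not an additional result):
η = √ε / E,   T = (L(θ^{(1)}) − L(θ*)) · E · ε^{−3/2},
so halving the target ε multiplies the required number of iterations T by 2^{3/2} ≈ 2.8, while the learning rate shrinks by a factor of √2.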
3.4 THE PROPOSED SMOOTHING METHOD
To further improve the convergence of PipeGCN, we propose a smoothing method to reduce the errors incurred by stale features/feature gradients at minimal overhead. Here we present the smoothing of feature gradients; the same formulation also applies to features. To improve the approximate gradients for each feature, fluctuations in feature gradients between adjacent iterations should be reduced. Therefore, we apply a light-weight moving average to the feature gradients of each boundary node v as follows:
δ̂_{z_v}^{(t,ℓ)} = γ · δ̂_{z_v}^{(t−1,ℓ)} + (1 − γ) · δ_{z_v}^{(t,ℓ)}
where δ̂_{z_v}^{(t,ℓ)} is the smoothed feature gradient at layer ℓ and iteration t, and γ is the decay rate. When integrating this smoothed feature gradient into the backward pass, Equ. 4 can be rewritten as:
δ̂_{h_u}^{(t,ℓ)} = ∑_{v: u∈N(v)\B(v)} (1/d_v) · δ_{z_v}^{(t,ℓ+1)} + ∑_{v: u∈B(v)} (1/d_v) · δ̂_{z_v}^{(t−1,ℓ+1)}
Note that the smoothing of stale features and gradients can be independently applied to PipeGCN.
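The smoothing step itself is a one-line exponential moving average per boundary node; a minimal sketch (the buffer layout and class name are assumptions for illustration, not the released code) is:

import numpy as np

class GradientSmoother:
    # Keeps a running average of each boundary node's feature gradient with decay rate gamma.
    def __init__(self, gamma=0.95):
        self.gamma = gamma
        self.avg = {}   # boundary node id -> smoothed gradient

    def update(self, node, grad):
        if node not in self.avg:
            self.avg[node] = grad.copy()
        else:
            # smoothed = gamma * previous average + (1 - gamma) * newly received stale gradient
            self.avg[node] = self.gamma * self.avg[node] + (1.0 - self.gamma) * grad
        return self.avg[node]

# usage sketch: smooth the gradient of boundary node 7 over a few iterations
smoother = GradientSmoother(gamma=0.95)
for _ in range(3):
    smoothed = smoother.update(7, np.random.randn(256))

The same class can be instantiated for features instead of gradients, which is one way the independent feature smoothing mentioned above could be realized.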
4 EXPERIMENT RESULTS
We evaluate PipeGCN on four large-scale datasets, Reddit (Hamilton et al., 2017), ogbn-products (Hu et al., 2020), Yelp (Zeng et al., 2020), and ogbn-papers100M (Hu et al., 2020). More details are provided in Tab. 3. To ensure robustness and reproducibility, we fix (i.e., do not tune) the hyperparameters and settings for PipeGCN and its variants throughout all experiments. To implement partition parallelism (for both vanilla distributed GCN training and PipeGCN), the widely used METIS (Karypis & Kumar, 1998) partition algorithm is adopted for graph partition with its objective set to minimize the communication volume. We implement PipeGCN in PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019). Experiments are conducted on a machine with 10 RTX-2080Ti (11GB), Xeon 6230R@2.10GHz (187GB), and PCIe3x16 connecting CPU-GPU and GPU-GPU. Only for ogbn-papers100M, we use 4 compute nodes (each contains 8 MI60 GPUs, an AMD EPYC 7642 CPU, and 48 lane PCI 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet. To support full-graph GCN training with the model sizes in Tab. 3, the minimum required partition numbers are 2, 3, 5, 32 for Reddit, ogbn-products, Yelp, and ogbn-papers100M, respectively.
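For readers who want to reproduce the partitioning step, a minimal sketch using the pymetis bindings is shown below; it is an assumption of this example that pymetis is installed, and this simplified call uses METIS's default edge-cut objective rather than the communication-volume objective used in our setup.

import pymetis  # thin Python bindings around the METIS partitioner (assumed available)

# adjacency list of a toy undirected 6-node graph
adjacency = [[1, 2], [0, 2, 3], [0, 1], [1, 4, 5], [3, 5], [3, 4]]

# split into 2 parts; membership[v] gives the partition id of node v
n_cuts, membership = pymetis.part_graph(2, adjacency=adjacency)
print(n_cuts, membership)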
For convenience, we here name all methods: vanilla partition-parallel training of GCNs (GCN), PipeGCN with feature gradient smoothing (PipeGCN-G), PipeGCN with feature smoothing (PipeGCN-F), and PipeGCN with both smoothing (PipeGCN-GF). The default decay rate γ for all smoothing methods is set to 0.95.
4.1 IMPROVING TRAINING THROUGHPUT OVER FULL-GRAPH TRAINING METHODS
Fig. 3 compares the training throughput between PipeGCN and the SOTA full-graph training methods (ROC (Jia et al., 2020) and CAGNET (Tripathy et al., 2020)). We observe that both vanilla partition-parallel training (GCN) and PipeGCN greatly outperform ROC and CAGNET across different numbers of partitions, because they avoid both the expensive CPU-GPU swaps (ROC) and the redundant node broadcast (CAGNET). Specifically, GCN is 3.1×∼16.4× faster than ROC and 2.1×∼10.2× faster than CAGNET (c=2). PipeGCN further improves upon GCN, achieving a throughput improvement of 5.6×∼28.5× over ROC and 3.9×∼17.7× over CAGNET (c=2)². Note that we are not able to compare PipeGCN with NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and P3 (Gandhi & Iyer, 2021), as their code is not publicly available. Besides, Dorylus (Thorpe et al., 2021) is not comparable, as it is not designed for regular GPU servers. Considering the substantial performance gap between ROC/CAGNET and GCN, we focus on comparing GCN with PipeGCN for the remainder of the section.
4.2 IMPROVING TRAINING THROUGHPUT WITHOUT COMPROMISING ACCURACY
We compare both the test score and the training throughput of GCN and PipeGCN in Tab. 4. We can see that PipeGCN without smoothing already achieves a test score comparable to that of vanilla GCN training on both Reddit and Yelp, and incurs only a negligible accuracy drop (-0.08%∼-0.23%) on ogbn-products, while boosting the training throughput by 1.72×∼2.16× across all datasets and different numbers of partitions³, thus validating the effectiveness of PipeGCN.
With the proposed smoothing method plugged in, PipeGCN-G/F/GF is able to compensate for the dropped score of vanilla PipeGCN, achieving a test score equal to or even better than that of vanilla GCN training (without staleness), e.g., 97.14% vs. 97.11% on Reddit, 79.36% vs. 79.14% on ogbn-products, and 65.28% vs. 65.26% on Yelp. Meanwhile, PipeGCN-G/F/GF enjoys a throughput improvement similar to that of vanilla PipeGCN, thus validating the negligible overhead of the proposed smoothing method. Therefore, pipelined transfer of features and gradients greatly improves the training throughput while maintaining the full-graph accuracy.
Note that our distributed GCN training methods consistently achieve higher test scores than SOTA sampling-based methods for GraphSAGE-based models reported in (Zeng et al., 2020) and (Hu et al., 2020), confirming that full-graph training is preferred to obtain better GCN models. For example, the best sampling-based method achieves a 96.6% accuracy on Reddit (Zeng et al., 2020) while full-graph GCN training achieves 97.1%, and PipeGCN improves the accuracy by 0.28% over sampling-based GraphSAGE models on ogbn-products (Hu et al., 2020). This advantage of full-graph training is also validated by recent works (Jia et al., 2020; Tripathy et al., 2020; Liu et al., 2022; Wan et al., 2022).
²More detailed comparisons among full-graph training methods can be found in Appendix B.
³More details regarding PipeGCN's advantages in training throughput can be found in Appendix C.
4.3 MAINTAINING CONVERGENCE SPEED
To understand PipeGCN's influence on the convergence speed, we compare the training curves of the different methods in Fig. 4. We observe that the convergence of PipeGCN without smoothing is still comparable with that of vanilla GCN training, although PipeGCN converges more slowly in the early phase of training and then catches up in the later phase, due to the staleness of boundary features/gradients. With the proposed smoothing methods, PipeGCN-G/F boosts the convergence substantially and matches the convergence speed of vanilla GCN training. There is no clear difference between PipeGCN-G and PipeGCN-F. Lastly, with combined smoothing of features and gradients, PipeGCN-GF can achieve the same or even slightly better convergence speed as vanilla GCN training (e.g., on Reddit) but can gradually overfit, similar to vanilla GCN training, which is further investigated in Sec. 4.4. Therefore, PipeGCN maintains the convergence speed w.r.t. the number of epochs while reducing the end-to-end training time by around 50% thanks to its boosted training throughput (see Tab. 4).
4.4 BENEFIT OF STALENESS SMOOTHING METHOD
Error Reduction and Convergence Speedup. To understand why the proposed smoothing technique (Sec. 3.4) speeds up convergence, we compare the error incurred by the stale communication between PipeGCN and PipeGCN-G/F. The error is calculated as the Frobenius norm of the gap between the correct gradient/feature and the stale gradient/feature used in PipeGCN training. Fig. 5 compares the error at each GCN layer. We can see that the proposed smoothing technique (PipeGCN-G/F) reduces the error of staleness substantially relative to the base version of PipeGCN, and this benefit consistently holds across different layers in terms of both feature and gradient errors, validating the effectiveness of our smoothing method and explaining its improvement to the convergence speed.
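The staleness error reported here can be reproduced with a few lines; a sketch (the tensor names are placeholders) is:

import numpy as np

def staleness_error(fresh, stale):
    # Frobenius norm of the gap between the exact and the stale quantity
    # (feature or gradient), as plotted in Fig. 5.
    return np.linalg.norm(fresh - stale)

def relative_staleness_error(fresh, stale):
    # A normalized variant that can be easier to compare across layers (not used in Fig. 5).
    return np.linalg.norm(fresh - stale) / np.linalg.norm(fresh)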
Overfitting Mitigation. To understand the effect of staleness smoothing on model overfitting, we also evaluate the test-accuracy convergence under different decay rates γ in Fig. 6. Here ogbn-products is adopted as the study case because the distribution of its test set largely differs from that of its training set. From Fig. 6, we observe that smoothing with a large γ (0.7/0.95) offers a fast convergence, i.e., close to the vanilla GCN training, but overfits rapidly. To understand this issue, we
further provide detailed comparisons of the errors incurred under different γ in Fig. 7. We can see that a larger γ enjoys lower approximation errors and makes the gradients/features more stable, thus improving the convergence speed. The increased stability on the training set, however, constrains the model from exploring a more general minimum point on the test set, thus leading to overfitting just as in vanilla GCN training. In contrast, a small γ (0 ∼ 0.5) mitigates this overfitting and achieves a better accuracy (see Fig. 6). However, a too-small γ (e.g., 0) gives a high error for both stale features and gradients (see Fig. 7), thus suffering from a slower convergence. Therefore, a trade-off between convergence speed and achievable optimality exists across different smoothing decay rates, and γ = 0.5 combines the best of both worlds in this study.
4.5 SCALING LARGE GRAPH TRAINING OVER MULTIPLE SERVERS
To further test the capability of PipeGCN, we scale up the graph size to ogbn-papers100M and train GCN over multiple GPU servers with 32 GPUs. Tab. 5 shows that even in such a large-scale setting where communication overhead dominates, PipeGCN still reduces communication time by 61%, leading to a total training time reduction of 38% compared to the vanilla GCN baseline⁴.
5 CONCLUSION
In this work, we propose a new method, PipeGCN, for efficient full-graph GCN training. PipeGCN pipelines communication with computation in distributed GCN training to hide the prohibitive communication overhead. More importantly, we are the first to provide convergence analysis for GCN training with both stale features and feature gradients, and further propose a light-weight smoothing method for convergence speedup. Extensive experiments validate the advantages of PipeGCN over both vanilla GCN training (without staleness) and state-of-the-art full-graph training.
⁴More experiments on multi-server training can be found in Appendix E.
6 ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), the CC∗ Compute program (Award number: 2019007), and the NeTS program (Award number: 1801865).
A CONVERGENCE PROOF
In this section, we prove the convergence of PipeGCN. Specifically, we first show that when the model is updated via gradient descent, the change of intermediate features and their gradients between adjacent iterations is bounded by a constant proportional to the learning rate η under standard assumptions. Based on this, we further demonstrate that the error caused by the staleness is proportional to η, which guarantees that the gradient error is bounded by ηE, where E is defined in Corollary A.10, and thus PipeGCN converges in O(ε^{-3/2}) iterations.
A.1 NOTATIONS AND ASSUMPTIONS
For a given graph G = (V, E) with an adjacency matrix A and a feature matrix X, we define the propagation matrix P as P := D̃^{−1/2} Ã D̃^{−1/2}, where Ã = A + I and D̃_{u,u} = ∑_v Ã_{u,v}. One GCN layer performs one step of feature propagation (Kipf & Welling, 2016), formulated as
H^{(0)} = X
Z^{(ℓ)} = P H^{(ℓ−1)} W^{(ℓ)}
H^{(ℓ)} = σ(Z^{(ℓ)})
where H^{(ℓ)}, W^{(ℓ)}, and Z^{(ℓ)} denote the embedding matrix, the trainable weight matrix, and the intermediate embedding matrix in the ℓ-th layer, respectively, and σ denotes a non-linear activation function. For an L-layer GCN, the loss function is denoted by L(θ), where θ = vec[W^{(1)}, W^{(2)}, ..., W^{(L)}]. We define the ℓ-th layer as a function f^{(ℓ)}(·, ·):
f^{(ℓ)}(H^{(ℓ−1)}, W^{(ℓ)}) := σ(P H^{(ℓ−1)} W^{(ℓ)})
Its gradient w.r.t. the input embedding matrix can be represented as
J^{(ℓ−1)} = ∇_H f^{(ℓ)}(J^{(ℓ)}, H^{(ℓ−1)}, W^{(ℓ)}) := P^⊤ M^{(ℓ)} [W^{(ℓ)}]^⊤
and its gradient w.r.t. the weight can be represented as
G^{(ℓ)} = ∇_W f^{(ℓ)}(J^{(ℓ)}, H^{(ℓ−1)}, W^{(ℓ)}) := [P H^{(ℓ−1)}]^⊤ M^{(ℓ)}
where M^{(ℓ)} = J^{(ℓ)} ◦ σ′(P H^{(ℓ−1)} W^{(ℓ)}) and ◦ denotes the Hadamard product. For partition-parallel training, we can split P into two parts, P = P_in + P_bd, where P_in represents intra-partition propagation and P_bd denotes inter-partition propagation. For PipeGCN, we can represent one GCN layer as below:
H̃^{(t,0)} = X
Z̃^{(t,ℓ)} = P_in H̃^{(t,ℓ−1)} W̃^{(t,ℓ)} + P_bd H̃^{(t−1,ℓ−1)} W̃^{(t,ℓ)}
H̃^{(t,ℓ)} = σ(Z̃^{(t,ℓ)})
where t is the epoch number and W̃^{(t,ℓ)} is the weight at epoch t and layer ℓ. We define the loss function for this setting as L̃(θ̃^{(t)}), where θ̃^{(t)} = vec[W̃^{(t,1)}, W̃^{(t,2)}, ..., W̃^{(t,L)}]. We can also summarize the layer as a function f̃^{(t,ℓ)}(·, ·):
f̃^{(t,ℓ)}(H̃^{(t,ℓ−1)}, W̃^{(t,ℓ)}) := σ(P_in H̃^{(t,ℓ−1)} W̃^{(t,ℓ)} + P_bd H̃^{(t−1,ℓ−1)} W̃^{(t,ℓ)})
Note that H̃^{(t−1,ℓ−1)} is not a part of the input of f̃^{(t,ℓ)}(·, ·) because it is a constant for the t-th epoch. The corresponding backward propagation follows the computation
J̃^{(t,ℓ−1)} = ∇_H f̃^{(t,ℓ)}(J̃^{(t,ℓ)}, H̃^{(t,ℓ−1)}, W̃^{(t,ℓ)})
G̃^{(t,ℓ)} = ∇_W f̃^{(t,ℓ)}(J̃^{(t,ℓ)}, H̃^{(t,ℓ−1)}, W̃^{(t,ℓ)})
where
M̃^{(t,ℓ)} = J̃^{(t,ℓ)} ◦ σ′(P_in H̃^{(t,ℓ−1)} W̃^{(t,ℓ)} + P_bd H̃^{(t−1,ℓ−1)} W̃^{(t,ℓ)})
∇_H f̃^{(t,ℓ)}(J̃^{(t,ℓ)}, H̃^{(t,ℓ−1)}, W̃^{(t,ℓ)}) := P_in^⊤ M̃^{(t,ℓ)} [W̃^{(t,ℓ)}]^⊤ + P_bd^⊤ M̃^{(t−1,ℓ)} [W̃^{(t−1,ℓ)}]^⊤
∇_W f̃^{(t,ℓ)}(J̃^{(t,ℓ)}, H̃^{(t,ℓ−1)}, W̃^{(t,ℓ)}) := [P_in H̃^{(t,ℓ−1)} + P_bd H̃^{(t−1,ℓ−1)}]^⊤ M̃^{(t,ℓ)}
Again, J̃^{(t−1,ℓ)} is not a part of the input of ∇_H f̃^{(t,ℓ)}(·, ·, ·) or ∇_W f̃^{(t,ℓ)}(·, ·, ·) because it is a constant for epoch t. Finally, we define ∇L̃(θ̃^{(t)}) = vec[G̃^{(t,1)}, G̃^{(t,2)}, ..., G̃^{(t,L)}]. It should be highlighted that the 'gradients' ∇_H f̃^{(t,ℓ)}(·, ·, ·), ∇_W f̃^{(t,ℓ)}(·, ·, ·), and ∇L̃(θ̃^{(t)}) are not the standard gradients of the corresponding forward process due to the stale communication; properties of gradients cannot be directly applied to these variables.
Before proceeding with our proof, we make the following standard assumptions about the adopted GCN architecture and input graph.
Assumption A.1. The loss function Loss(·, ·) is C_loss-Lipschitz continuous and L_loss-smooth w.r.t. the input node embedding vector, i.e., |Loss(h^{(L)}, y) − Loss(h′^{(L)}, y)| ≤ C_loss ‖h^{(L)} − h′^{(L)}‖_2 and ‖∇Loss(h^{(L)}, y) − ∇Loss(h′^{(L)}, y)‖_2 ≤ L_loss ‖h^{(L)} − h′^{(L)}‖_2, where h^{(L)} is the predicted label and y is the correct label vector.
Assumption A.2. The activation function σ(·) is C_σ-Lipschitz continuous and L_σ-smooth, i.e., ‖σ(z^{(ℓ)}) − σ(z′^{(ℓ)})‖_2 ≤ C_σ ‖z^{(ℓ)} − z′^{(ℓ)}‖_2 and ‖σ′(z^{(ℓ)}) − σ′(z′^{(ℓ)})‖_2 ≤ L_σ ‖z^{(ℓ)} − z′^{(ℓ)}‖_2.
Assumption A.3. For any ℓ ∈ [L], the norms of the weight matrices, the propagation matrix, and the input feature matrix are bounded: ‖W^{(ℓ)}‖_F ≤ B_W, ‖P‖_F ≤ B_P, ‖X‖_F ≤ B_X. (This generic assumption is also used in (Chen et al., 2018; Liao et al., 2020; Garg et al., 2020; Cong et al., 2021).)
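For concreteness, the propagation matrix defined above and the norm bound ‖P‖_F ≤ B_P of Assumption A.3 can be checked numerically on a toy graph; the sketch below (toy adjacency and sizes are assumptions) only illustrates the definitions.

import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])              # toy adjacency matrix
A_tilde = A + np.eye(3)                    # add self-loops: A~ = A + I
d = A_tilde.sum(axis=1)                    # D~_{u,u} = sum_v A~_{u,v}
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
P = D_inv_sqrt @ A_tilde @ D_inv_sqrt      # P = D~^{-1/2} A~ D~^{-1/2}

B_P = np.linalg.norm(P, 'fro')             # a valid choice for the bound in Assumption A.3
print(np.round(P, 3), B_P)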
A.2 BOUNDED MATRICES AND CHANGES
Lemma A.1. For any ℓ ∈ [L], the Frobenius norms of the node embedding matrices, of the gradients passed from the ℓ-th layer node embeddings to the (ℓ−1)-th, and of the gradient matrices are bounded, i.e.,
‖H^{(ℓ)}‖_F, ‖H̃^{(t,ℓ)}‖_F ≤ B_H,
‖J^{(ℓ)}‖_F, ‖J̃^{(t,ℓ)}‖_F ≤ B_J,
‖M^{(ℓ)}‖_F, ‖M̃^{(t,ℓ)}‖_F ≤ B_M,
‖G^{(ℓ)}‖_F, ‖G̃^{(t,ℓ)}‖_F ≤ B_G,
where
B_H = max_{1≤ℓ≤L} (C_σ B_P B_W)^ℓ B_X,
B_J = max_{2≤ℓ≤L} (C_σ B_P B_W)^{L−ℓ} C_loss,
B_M = C_σ B_J,
B_G = B_P B_H B_M.
Proof. The proof of ‖H(`)‖F ≤ BH and ‖J (`)‖F ≤ BJ can be found in Proposition 1 in (Cong et al., 2021). By induction,
‖H̃(t,`)‖F = ‖σ(PinH̃(t,`−1)W̃ (t,`) + PbdH̃(t−1,`−1)W̃ (t,`))‖F ≤ CσBW ‖Pin + Pbd‖F (CσBPBW )`−1BX ≤ (CσBPBW )`BX
‖J̃ (t,`−1)‖F = ∥∥∥P>in (J̃ (t,`) ◦ σ′(Z̃(t,`))) [W̃ (t,`)]> + P>bd (J̃ (t−1,`) ◦ σ′(Z̃(t−1,`))) [W̃ (t−1,`)]>∥∥∥
F
≤ CσBW ‖Pin + Pbd‖F (CσBPBW )L−`Closs ≤ (CσBPBW )L−`+1Closs
‖M (`)‖F = ‖J (`) ◦ σ′(Z(`))‖F ≤ CσBJ ‖M̃ (t,`)‖F = ‖J̃ (t,`) ◦ σ′(Z̃(t,`))‖F ≤ CσBJ
G(`) = [PH(`−1)]>M (`)
≤ BPBHBM
G̃(t,`) = [PinH̃ (t,`−1) + PbdH̃ (t−1,`−1)]>M̃ (t,`)
≤ BPBHBM
Because the gradient matrices are bounded, the weight change is bounded.
Corollary A.2. For any t and ℓ, ‖W̃^{(t,ℓ)} − W̃^{(t−1,ℓ)}‖_F ≤ B_ΔW = η B_G, where η is the learning rate.
Now we can analyze the changes of intermediate variables.
Lemma A.3. For any t and ℓ, we have ‖Z̃^{(t,ℓ)} − Z̃^{(t−1,ℓ)}‖_F ≤ B_ΔZ and ‖H̃^{(t,ℓ)} − H̃^{(t−1,ℓ)}‖_F ≤ B_ΔH, where B_ΔZ = ∑_{i=0}^{L−1} C_σ^i B_P^{i+1} B_W^i B_H B_ΔW and B_ΔH = C_σ B_ΔZ.
Proof. When ` = 0, ‖H̃(t,0)− H̃(t−1,0)‖F = ‖X−X‖F = 0. Now we consider ` > 0 by induction.
‖Z̃(t,`) − Z̃(t−1,`)‖F =‖(PinH̃(t,`−1)W̃ (t,`) + PbdH̃(t−1,`−1)W̃ (t,`))
− (PinH̃(t−1,`−1)W̃ (t−1,`) + PbdH̃(t−2,`−1)W̃ (t−1,`))‖F =‖Pin(H̃(t,`−1)W̃ (t,`) − H̃(t−1,`−1)W̃ (t−1,`))
+ Pbd(H̃ (t−1,`−1)W̃ (t,`) − H̃(t−2,`−1)W̃ (t−1,`))‖F
Then we analyze the bound of ‖H̃(t,`−1)W̃ (t,`) − H̃(t−1,`−1)W̃ (t−1,`)‖F which is denoted by s(t,`).
s(t,`) ≤ ‖H̃(t,`−1)W̃ (t,`) − H̃(t,`−1)W̃ (t−1,`)‖F + ‖H̃(t,`−1)W̃ (t−1,`) − H̃(t−1,`−1)W̃ (t−1,`)‖F ≤ BH‖W̃ (t,`) − W̃ (t−1,`)‖F +BW ‖H̃(t,`−1) − H̃(t−1,`−1)‖F
According to Corollary A.2, ‖W̃ (t,`) − W̃ (t−1,`)‖F ≤ B∆W . By induction, ‖H̃(t,`−1) − H̃(t−1,`−1)‖F ≤ `−2∑ i=0 Ci+1σ B i+1 P B i WBHB∆W . Combining these inequalities,
s(t,`) ≤ BHB∆W + `−1∑ i=1 CiσB i PB i WBHB∆W
Plugging it back, we have
‖Z̃(t,`) − Z̃(t−1,`)‖F ≤‖Pin(H̃(t,`−1)W̃ (t,`) − H̃(t−1,`−1)W̃ (t−1,`))
+ Pbd(H̃ (t−1,`−1)W̃ (t,`) − H̃(t−2,`−1)W̃ (t−1,`))‖F
≤BP ( BHB∆W +
`−1∑ i=1 CiσB i PB i WBHB∆W
)
= `−1∑ i=0 CiσB i+1 P B i WBHB∆W
‖H̃(t,`) − H̃(t−1,`)‖F =‖σ(Z̃(t,`))− σ(Z̃(t−1,`))‖F ≤Cσ‖Z̃(t,`) − Z̃(t−1,`)‖F ≤CσB∆Z
Lemma A.4. ‖J̃^{(t,ℓ)} − J̃^{(t−1,ℓ)}‖_F ≤ B_ΔJ, where
B_ΔJ = max_{2≤ℓ≤L} (B_P B_W C_σ)^{L−ℓ} B_ΔH L_loss + (B_M B_ΔW + L_σ B_J B_ΔZ B_W) ∑_{i=0}^{L−3} B_P^{i+1} B_W^i C_σ^i.
Proof. For the last layer (` = L), ‖J̃ (t,L)− J̃ (t−1,L)‖F ≤ Lloss‖H̃(t,L)− H̃(t−1,L)‖F ≤ LlossB∆H . For the case of ` < L, we prove the lemma by using induction.
‖J̃ (t,`−1) − J̃ (t−1,`−1)‖F = ∥∥∥(P>inM̃ (t,`)[W̃ (t,`)]> + P>bdM̃ (t−1,`)[W̃ (t−1,`)]>) − ( P>inM̃ (t−1,`)[W̃ (t−1,`)]> + P>bdM̃ (t−2,`)[W̃ (t−2,`)]> )∥∥∥ F
≤ ∥∥∥P>in (M̃ (t,`)[W̃ (t,`)]> − M̃ (t−1,`)[W̃ (t−1,`)]>)∥∥∥
F + ∥∥∥P>bd (M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t−2,`)[W̃ (t−2,`)]>)∥∥∥
F
We denote ∥∥∥M̃ (t,`)[W̃ (t,`)]> − M̃ (t−1,`)[W̃ (t−1,`)]>∥∥∥
F by s(t,`) and analyze its bound.
s(t,`) ≤ ∥∥∥M̃ (t,`)[W̃ (t,`)]> − M̃ (t,`)[W̃ (t−1,`)]>∥∥∥
F + ∥∥∥M̃ (t,`)[W̃ (t−1,`)]> − M̃ (t−1,`)[W̃ (t−1,`)]>∥∥∥
F ≤BM ∥∥∥[W̃ (t,`)]> − [W̃ (t−1,`)]>∥∥∥
F +BW ∥∥∥M̃ (t,`) − M̃ (t−1,`)∥∥∥ F
According to Corollary A.2, ∥∥∥[W̃ (t,`)]> − [W̃ (t−1,`)]>∥∥∥
F ≤ B∆W . For the second term,
‖M̃ (t,`) − M̃ (t−1,`)‖F =‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J̃ (t−1,`) ◦ σ′(Z̃(t−1,`))‖F ≤‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J̃ (t,`) ◦ σ′(Z̃(t−1,`))‖F + ‖J̃ (t,`) ◦ σ′(Z̃(t−1,`))− J̃ (t−1,`) ◦ σ′(Z̃(t−1,`))‖F ≤BJ‖σ′(Z̃(t,`))− σ′(Z̃(t−1,`))‖F + Cσ‖J̃ (t,`) − J̃ (t−1,`)‖F (5)
According to the smoothness of σ and Lemma A.3, ‖σ′(Z̃(t,`)) − σ′(Z̃(t−1,`))‖F ≤ LσB∆Z . By induction,
‖J̃ (t,`) − J̃ (t−1,`)‖F
≤ (BPBWCσ)(L−`)B∆HLloss + (BMB∆W + LσBJB∆ZBW ) L−`−1∑ i=0 Bi+1P B i WC i σ
As a result,
s(t,`) ≤BMB∆W +BWBJLσB∆Z +BWCσ‖J̃ (t,`) − J̃ (t−1,`)‖F =(BMB∆W +BWBJLσB∆Z) +B (L−`) P B (L−`+1) W C (L−`+1) σ B∆HLloss
+ (BMB∆W + LσBJB∆ZBW ) L−∑̀ i=1 BiPB i WC i σ
≤B(L−`)P B (L−`+1) W C (L−`+1) σ B∆HLloss
+ (BMB∆W + LσBJB∆ZBW ) L−∑̀ i=0 BiPB i WC i σ
‖J̃ (t,`−1) − J̃ (t−1,`−1)‖F = ∥∥∥P>in (M̃ (t,`)[W̃ (t,`)]> − M̃ (t−1,`)[W̃ (t−1,`)]>)∥∥∥
F + ∥∥∥P>bd (M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t−2,`)[W̃ (t−2,`)]>)∥∥∥
F
≤BP s(t,`)
≤(BPBWCσ)(L−`+1)B∆HLloss
+ (BMB∆W + LσBJB∆ZBW ) L−∑̀ i=0 Bi+1P B i WC i σ
From Equation 5, we can also conclude that
Corollary A.5. ‖M̃^{(t,ℓ)} − M̃^{(t−1,ℓ)}‖_F ≤ B_ΔM, with B_ΔM = B_J L_σ B_ΔZ + C_σ B_ΔJ.
A.3 BOUNDED FEATURE ERROR AND GRADIENT ERROR
In this subsection, we compare the difference between generic GCN and PipeGCN with the same parameter set, i.e., θ = θ̃(t).
Lemma A.6. ‖Z̃^{(t,ℓ)} − Z^{(ℓ)}‖_F ≤ E_Z and ‖H̃^{(t,ℓ)} − H^{(ℓ)}‖_F ≤ E_H, where E_Z = B_ΔH ∑_{i=1}^{L} C_σ^{i−1} B_W^i B_P^i and E_H = B_ΔH ∑_{i=1}^{L} (C_σ B_W B_P)^i.
Proof.
‖Z̃(t,`) − Z(`)‖F = ‖(PinH̃(t,`−1)W̃ (t,`) + PbdH̃(t−1,`−1)W̃ (t,`))− (PH(`−1)W (`))‖F ≤ ‖(PinH̃(t,`−1) + PbdH̃(t−1,`−1) − PH(`−1))W (`)‖F = BW ‖P (H̃(t,`−1) −H(`−1)) + Pbd(H̃(t−1,`−1) − H̃(t,`−1))‖F
≤ BWBP ( ‖H̃(t,`−1) −H(`−1)‖F +B∆H ) By induction, we assume that ‖H̃(t,`−1) −H(`−1)‖F ≤ B∆H
`−1∑ i=1 (CσBWBP ) i. Therefore,
‖Z̃(t,`) − Z(`)‖F ≤ BWBPB∆H `−1∑ i=0 (CσBWBP ) i
= B∆H ∑̀ i=1 Ci−1σ B i WB i P
‖H̃(t,`) −H(`)‖F = ‖σ(Z̃(t,`))− σ(Z(`))‖F ≤ Cσ‖Z̃(t,`) − Z(`)‖F
≤ B∆H ∑̀ i=1 (CσBWBP ) i
Lemma A.7. ‖J̃^{(t,ℓ)} − J^{(ℓ)}‖_F ≤ E_J and ‖M̃^{(t,ℓ)} − M^{(ℓ)}‖_F ≤ E_M, with
E_J = max_{2≤ℓ≤L} (B_P B_W C_σ)^{L−ℓ} L_loss E_H + B_P (B_W (B_J E_Z L_σ + B_ΔM) + B_ΔW B_M) ∑_{i=0}^{L−3} (B_P B_W C_σ)^i
E_M = C_σ E_J + L_σ B_J E_Z.
Proof. When ` = L, ‖J̃ (t,L) − J (L)‖F ≤ LlossEH . For any `, we assume that
‖J̃ (t,`) − J (`)‖F ≤ (BPBWCσ)L−`LlossEH + U L−`−1∑ i=0 (BPBWCσ) i (6)
‖M̃ (t,`) −M (`)‖F ≤ (BPBWCσ)L−`CσLlossEH + UCσ L−`−1∑ i=0 (BPBWCσ) i + LσBJEZ (7)
where U = BP (BWBJEZLσ +B∆WBM +BWB∆M ). We prove them by induction as follows.
‖M̃ (t,`) −M (`)‖F = ‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J (`) ◦ σ′(Z(`))‖F ≤ ‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J̃ (t,`) ◦ σ′(Z(`))‖F + ‖J̃ (t,`) ◦ σ′(Z(`))− J (`) ◦ σ′(Z(`))‖F ≤ BJ‖σ′(Z̃(t,`))− σ′(Z(`))‖F + Cσ‖J̃ (t,`) − J (`)‖F
Here ‖σ′(Z̃(t,`))− σ′(Z(`))‖F ≤ LσEZ . With Equation 6,
‖M̃ (t,`) −M (`)‖F ≤ (BPBWCσ)L−`CσLlossEH + UCσ L−`−1∑ i=0 (BPBWCσ) i + LσBJEZ
On the other hand, ‖J̃ (t,`−1) − J (`−1)‖F = ‖P>inM̃ (t,`)[W̃ (t,`)]> + P>bdM̃ (t−1,`)[W̃ (t−1,`)]> − P>M (`)[W (`)]>‖F = ‖P>(M̃ (t,`) −M (`))[W (`)]> + P>bd(M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>)‖F ≤ ‖P>(M̃ (t,`) −M (`))[W (`)]>‖F + ‖P>bd(M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>)‖F ≤ BPBW ‖M̃ (t,`) −M (`)‖F +BP ‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F
The first part is bounded by Equation 7. For the second part,
‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F ≤ ‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t−1,`)[W̃ (t,`)]>‖F + ‖M̃ (t−1,`)[W̃ (t,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F ≤ B∆WBM +BWB∆M
Therefore, ‖J̃ (t,`−1) − J (`−1)‖F ≤ BPBW ‖M̃ (t,`) −M (`)‖F +BP ‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F
≤ (BPBWCσ)L−`+1LlossEH + U L−∑̀ i=1 (BPBWCσ) i + U
= (BPBWCσ) L−`+1LlossEH + U L−∑̀ i=0 (BPBWCσ) i
Lemma A.8. ‖G̃^{(t,ℓ)} − G^{(ℓ)}‖_F ≤ E_G, where E_G = B_P (B_H E_M + B_M E_H).
Proof.
‖G̃(t,`) −G(`)‖F = ∥∥∥[PinH̃(t,`−1) + PbdH̃(t−1,`−1)]>M̃ (t,`) − [PH(`)]>M (`)∥∥∥
F ≤ ∥∥∥[PinH̃(t,`−1) + PbdH̃(t−1,`−1)]>M̃ (t,`) − [PH(`−1)]>M̃ (t,`)∥∥∥
F + ∥∥∥[PH(`−1)]>M̃ (t,`) − [PH(`−1)]>M (`)∥∥∥
F
≤BM (‖P (H̃(t,`−1) −H(`−1)) + Pbd(H̃(t−1,`−1) − H̃(t,`−1))‖F ) +BPBHEM ≤BMBP (EH +B∆H) +BPBHEM
By summing up from ℓ = 1 to ℓ = L on both sides, we have
Corollary A.9. ‖∇L̃(θ) − ∇L(θ)‖_2 ≤ E_loss, where E_loss = L E_G.
According to the derivation of E_loss, we observe that E_loss contains a factor η. To simplify the expression of E_loss, we assume that B_P B_W C_σ ≤ 1/2 without loss of generality, and rewrite Corollary A.9 as the following.
Corollary A.10. ‖∇L̃(θ) − ∇L(θ)‖_2 ≤ ηE, where
E = (1/8) L B_P^3 B_X^2 C_loss C_σ (3 B_X C_σ^2 L_loss + 6 B_X C_loss L_σ + 10 C_loss C_σ^2).
A.4 PROOF OF THE MAIN THEOREM
We first introduce a lemma before the proof of our main theorem. Lemma A.11 (Lemma 1 in (Cong et al., 2021)). An L-layer GCN is L_f-Lipschitz smooth, i.e., ‖∇L(θ_1) − ∇L(θ_2)‖_2 ≤ L_f ‖θ_1 − θ_2‖_2.
Now we prove the main theorem.
Theorem A.12 (Convergence of PipeGCN, formal). Under Assumptions A.1, A.2, and A.3, by choosing a learning rate η = √ε / E and a number of training iterations T = (L(θ^{(1)}) − L(θ*)) E ε^{−3/2}, we can derive
(1/T) ∑_{t=1}^{T} ‖∇L(θ^{(t)})‖^2 ≤ 3ε
where E is defined in Corollary A.10, ε > 0 is an arbitrarily small constant, L(·) is the loss function, and θ^{(t)} and θ* represent the parameter vector at iteration t and the optimal parameter, respectively.
Proof. By the smoothness of the model,
L(θ^{(t+1)}) ≤ L(θ^{(t)}) + ⟨∇L(θ^{(t)}), θ^{(t+1)} − θ^{(t)}⟩ + (L_f / 2) ‖θ^{(t+1)} − θ^{(t)}‖_2^2
= L(θ^{(t)}) − η ⟨∇L(θ^{(t)}), ∇L̃(θ^{(t)})⟩ + (η^2 L_f / 2) ‖∇L̃(θ^{(t)})‖_2^2.
Let δ^{(t)} = ∇L̃(θ^{(t)}) − ∇L(θ^{(t)}) and η ≤ 1/L_f; we have
L(θ^{(t+1)}) ≤ L(θ^{(t)}) − η ⟨∇L(θ^{(t)}), ∇L(θ^{(t)}) + δ^{(t)}⟩ + (η/2) ‖∇L(θ^{(t)}) + δ^{(t)}‖_2^2
≤ L(θ^{(t)}) − (η/2) ‖∇L(θ^{(t)})‖_2^2 + (η/2) ‖δ^{(t)}‖_2^2.
From Corollary A.10 we know that ‖δ^{(t)}‖_2 < ηE. After rearranging the terms,
‖∇L(θ^{(t)})‖_2^2 ≤ (2/η) (L(θ^{(t)}) − L(θ^{(t+1)})) + η^2 E^2.
Summing up from t = 1 to T and taking the average,
(1/T) ∑_{t=1}^{T} ‖∇L(θ^{(t)})‖_2^2 ≤ (2/(ηT)) (L(θ^{(1)}) − L(θ^{(T+1)})) + η^2 E^2
≤ (2/(ηT)) (L(θ^{(1)}) − L(θ*)) + η^2 E^2,
where θ* is the minimum point of L(·). By taking η = √ε / E and T = (L(θ^{(1)}) − L(θ*)) E ε^{−3/2} with an arbitrarily small constant ε > 0, we have
(1/T) ∑_{t=1}^{T} ‖∇L(θ^{(t)})‖^2 ≤ 3ε.
B TRAINING TIME BREAKDOWN OF FULL-GRAPH TRAINING METHODS
To understand why PipeGCN significantly boosts the training throughput over full-graph training methods, we provide the detailed time breakdown in Tab. 6 using the same model as Tab. 3 (4-layer GraphSAGE, 256 hidden units), in which “GCN” denotes the vanilla partition-parallel training illustrated in Fig. 1(a). We observe that PipeGCN greatly saves communication time.
C TRAINING TIME IMPROVEMENT BREAKDOWN OF PIPEGCN
To understand the training time improvement offered by PipeGCN, we further breakdown the epoch time into three parts (intra-partition computation, inter-partition communication, and reduce for aggregating model gradient) and provide the result in Fig. 8. We can observe that: 1) interpartition communication dominates the training time in vanilla partition-parallel training (GCN); 2) PipeGCN (with or without smoothing) greatly hides the communication overhead across different number of partitions and all datasets, e.g., the communication time is hidden completely in 2-partition Reddit and almost completely in 3-partition Yelp, thus the substantial reduction in training time; and 3) the proposed smoothing incurs only minimal overhead (i.e., minor difference between PipeGCN and PipeGCN-GF). Lastly, we also notice that when communication ratio is extremely large (85%+), PipeGCN hides communication significantly but not completely (e.g., 10-partition ogbn-products), in which case we can employ those compression and quantization techniques (Alistarh et al. (2017); Seide et al. (2014); Wen et al. (2017); Li et al. (2018a); Yu et al. (2018)) from the area of general distributed SGD for further reducing the communication, as the compression is orthogonal to the pipeline method. Besides compression, we can also increase the pipeline depth of PipeGCN, e.g., using two iterations of compute to hide one iteration of communication, which is left to our future work.
D MAINTAINING CONVERGENCE SPEED (ADDITIONAL EXPERIMENTS)
We provide the additional convergence curves on Yelp in Fig. 9. We can see that PipeGCN and its variants maintain the convergence speed w.r.t the number of epochs while substantially reducing the end-to-end training time.
E SCALING GCN TRAINING OVER MULTIPLE GPU SERVERS
We also scale up PipeGCN training over multiple GPU servers (each contains AMD Radeon Instinct MI60 GPUs, an AMD EPYC 7642 CPU, and 48 lane PCI 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet.
The accuracy results of PipeGCN and its variants are summarized in Tab. 7:
Furthermore, we provide PipeGCN’s speedup against vanilla partition-parallel training in Tab. 8:
From the two tables above, we can observe that our PipeGCN family consistently maintains the accuracy of the full-graph training, while improving the throughput by 15%∼66% regardless of the machine settings and number of partitions.
F IMPLEMENTATION DETAILS
We discuss the details of the effective and efficient implementation of PipeGCN in this section.
First, for parallel communication and computation, a second cudaStream is required for communication besides the default cudaStream for computation. To also save memory buffers for communication, we batch all communication (e.g., from different layers) into this second cudaStream. When the popular communication backend, Gloo, is used, we parallelize the CPU-GPU transfer with CPU-CPU transfer.
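A heavily simplified sketch of this mechanism is shown below, assuming PyTorch with torch.distributed already initialized; the single-tensor boundary buffers, peer rank, and function names are illustrative assumptions, and per-layer threads, Gloo-specific CPU staging, and error handling are omitted.

import torch
import torch.distributed as dist

comm_stream = torch.cuda.Stream()   # dedicated stream so transfers can overlap with compute

def start_boundary_exchange(send_buf, recv_buf, peer):
    # Kick off the boundary-feature exchange that will serve the *next* iteration.
    with torch.cuda.stream(comm_stream):
        reqs = [dist.isend(send_buf, dst=peer), dist.irecv(recv_buf, src=peer)]
    return reqs

def finish_boundary_exchange(reqs):
    # Called at the start of the next iteration, before the (stale) recv buffer is consumed.
    for r in reqs:
        r.wait()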
Second, when a dropout layer is used in the GCN model, it should be applied after communication. The implementation of the dropout layer for PipeGCN should be considered carefully so that the dropout mask remains consistent between the input tensor and the corresponding gradient. If the input feature passes through the dropout layer before being communicated, then during the backward phase the dropout mask has changed and the gradients of masked values are involved in the computation, which introduces noise into the calculation of follow-up gradients. As a result, the dropout layer can only be applied after receiving the boundary features.
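A minimal sketch of this ordering constraint (the tensor names and the concatenation layout are assumptions for illustration):

import torch
import torch.nn.functional as F

def layer_input_with_dropout(inner_feat, stale_boundary_feat, p=0.5, training=True):
    # Concatenate fresh inner features with the received (stale) boundary features first,
    # then apply dropout, so a single mask covers the exact tensor consumed by the layer.
    full = torch.cat([inner_feat, stale_boundary_feat], dim=0)
    return F.dropout(full, p=p, training=training)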
Summary Of The Paper
This paper proposes a distributed full-graph GCN training method to speed up GCN training for large-scale graphs. Experiments demonstrate its performance and efficiency. Convergence proof is also provided.
Review
This paper proposes a distributed GCN training on large graphs named PipeGCN. Specifically, the authors hide the communication time by parallelizing the communication and computation process in each layer and using the stale information for parameter updates. The idea seemed simple and straightforward, and experiments show that the algorithm can achieve up to 2.2x speedup. The authors provide the convergence proof for PipeGCN and propose two smoothing methods for faster convergence.
Several limitations of this paper are listed as follows:
1. The proof of convergence should be justified better. The authors do not explain how the proposed model satisfies Assumptions 3.1-3.3, e.g., whether the loss function they chose satisfies the Lipschitz-continuity condition in Assumption 3.1. In addition, the convergence proof is not applicable if PipeGCN uses the most commonly used ReLU activation function, as ReLU does not satisfy Assumption 3.2.
2. Some statements in the paper are not clear enough. For example, in Table 1 the authors do not mention the AllReduce of weight gradients for PipeGCN, whereas it appears in line 32 of Algorithm 1.
ICLR | Title
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
Abstract
Graph Convolutional Networks (GCNs) is the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead of communicating node features and feature gradients among partitions for every GCN layer during each training iteration, limiting the achievable training efficiency and model scalability. To this end, we propose PipeGCN, a simple yet effective scheme that hides the communication overhead by pipelining inter-partition communication with intra-partition computation. It is non-trivial to pipeline for efficient GCN training, as communicated node features/gradients will become stale and thus can harm the convergence, negating the pipeline benefit. Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients. This work not only provides a theoretical convergence analysis but also finds the convergence rate of PipeGCN to be close to that of the vanilla distributed GCN training without any staleness. Furthermore, we develop a smoothing method to further improve PipeGCN’s convergence. Extensive experiments show that PipeGCN can largely boost the training throughput (1.7×∼28.5×) while achieving the same accuracy as its vanilla counterpart and existing full-graph training methods. The code is available at https://github.com/RICE-EIC/PipeGCN.
1 INTRODUCTION
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) have gained great popularity recently as they demonstrated the state-of-the-art (SOTA) performance in learning graph-structured data (Zhang & Chen, 2018; Xu et al., 2018; Ying et al., 2018). Their promising performance is resulting from their ability to capture diverse neighborhood connectivity. In particular, a GCN aggregates all features from the neighbor node set for a given node, the feature of which is then updated via a multi-layer perceptron. Such a two-step process (neighbor aggregation and node update) empowers GCNs to better learn graph structures. Despite their promising performance, training GCNs at scale is still a challenging problem, as a prohibitive amount of compute and memory resources are required to train a real-world large-scale graph, let alone exploring deeper and more advanced models. To overcome this challenge, various sampling-based methods have been proposed to reduce the resource requirement at a cost of incurring feature approximation errors. A straightforward instance is to create mini-batches by sampling neighbors (e.g., GraphSAGE (Hamilton et al., 2017) and VR-GCN (Chen et al., 2018)) or to extract subgraphs as training samples (e.g., Cluster-GCN (Chiang et al., 2019) and GraphSAINT (Zeng et al., 2020)).
In addition to sampling-based methods, distributed GCN training has emerged as a promising alternative, as it enables large full-graph training of GCNs across multiple accelerators such as GPUs.
This approach first partitions a giant graph into multiple small subgraps, each of which is able to fit into a single GPU, and then train these partitioned subgraphs locally on GPUs together with indispensable communication across partitions. Following this direction, several recent works (Ma et al., 2019; Jia et al., 2020; Tripathy et al., 2020; Thorpe et al., 2021; Wan et al., 2022) have been proposed and verified the great potential of distributed GCN training. P 3 (Gandhi & Iyer, 2021) follows another direction that splits the data along the feature dimension and leverages intra-layer model parallelism for training, which shows superior performance on small models.
In this work, we propose a new method for distributed GCN training, PipeGCN, which targets achieving a full-graph accuracy with boosted training efficiency. Our main contributions are following:
• We first analyze two efficiency bottlenecks in distributed GCN training: the required significant communication overhead and frequent synchronization, and then propose a simple yet effective technique called PipeGCN to address both aforementioned bottlenecks by pipelining inter-partition communication with intra-partition computation to hide the communication overhead.
• We address the challenge raised by PipeGCN, i.e., the resulting staleness in communicated features and feature gradients (neither weights nor weight gradients), by providing a theoretical convergence analysis and showing that PipeGCN’s convergence rate is O(T− 23 ), i.e., close to vanilla distributed GCN training without staleness. To the best of our knowledge, we are the first to provide a theoretical convergence proof of GCN training with both stale feature and stale feature gradients.
• We further propose a low-overhead smoothing method to further improve PipeGCN’s convergence by reducing the error incurred by the staleness.
• Extensive empirical and ablation studies consistently validate the advantages of PipeGCN over both vanilla distributed GCN training and those SOTA full-graph training methods (e.g., boosting the training throughput by 1.7×∼28.5× while achieving the same or a better accuracy).
2 BACKGROUND AND RELATED WORKS
Graph Convolutional Networks. GCNs represent each node in a graph as a feature (embedding) vector and learn the feature vector via a two-step process (neighbor aggregation and then node update) for each layer, which can be mathematically described as:
z(`)v = ζ (`) ({ h(`−1)u | u ∈ N (v) }) (1)
h(`)v = φ (`) ( z(`)v , h (`−1) v ) (2)
where N (v) is the neighbor set of node v in the graph, h(`)v represents the learned embedding vector of node v at the `-th layer, z(`)v is an intermediate aggregated feature calculated by an aggregation function ζ(`), and φ(`) is the function for updating the feature of node v. The original GCN (Kipf & Welling, 2016) uses a weighted average aggregator for ζ(`) and the update function φ(`) is a single-layer perceptron σ(W (`)z(`)v ) where σ(·) is a non-linear activation function and W (`) is a weight matrix. Another famous GCN instance is GraphSAGE (Hamilton et al., 2017) in which φ(`) is σ ( W (`) · CONCAT ( z (`) v , h (`−1) v )) .
Distributed Training for GCNs. A real-world graph can contain millions of nodes and billions of edges (Hu et al., 2020), for which a feasible training approach is to partition it into small subgraphs (to fit into each GPU’s resource), and train them in parallel, during which necessary communication is performed to exchange boundary node features and gradients to satisfy GCNs’s neighbor aggregation (Equ. 1). Such an approach is called vanilla partition-parallel training and is illustrated in Fig. 1 (a). Following this approach, several works have been proposed recently. NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and ROC (Jia et al., 2020) perform such partition-parallel training but rely on CPUs for storage for all partitions and repeated swapping of a partial partition to GPUs. Inevitably, prohibitive CPU-GPU swaps are incurred, plaguing the achievable training efficiency. CAGNET (Tripathy et al., 2020) is different in that it splits each node feature vector into tiny sub-vectors which are then broadcasted and computed sequentially, thus requiring redundant communication and frequent synchronization. Furthermore, P 3 (Gandhi & Iyer, 2021) proposes to split both the feature and the GCN layer for mitigating the communication overhead, but it makes a strong assumption that the hidden dimensions of a GCN should be considerably smaller than that of
(a) Vanilla partition-parallel training
2 5
1
6
3
4
Graph
Inner Node Boundary Node
Communicate Boundary Feature & Grad
Pipeline
Communicate Compute ...
... ... Iteration #2Iteration #1
52 61
Part 3 ...
Part 1 ...
4
2
5
1
6 4
Communicate Compute ...
Iteration #2
52 61
...
...
... ...Communicate Compute Compute
Communicate
(b) Timeline of vanilla partition-parallel training (c) PipeGCN
2
6
5
3 Part 2
Iteration #1
Timeline of (a)
3 4 52 3 4 52
... ...
... ...
Timeline of PipeGCN
Part 1 Part 2 Part 3 Partition
1 2
6 5
3
4
Figure 1: An illustrative comparison between vanilla partition-parallel training and PipeGCN.
input features, which restricts the model size. A concurrent work Dorylus (Thorpe et al., 2021) adopts a fine-grained pipeline along each compute operation in GCN training and supports asynchronous usage of stale features. Nevertheless, the resulting staleness of feature gradients is neither analyzed nor considered for convergence proof, let alone error reduction methods for the incurred staleness.
Asynchronous Distributed Training. Many prior works have been proposed for asynchronous distributed training of DNNs. Most works (e.g., Hogwild! (Niu et al., 2011), SSP (Ho et al., 2013), and MXNet (Li et al., 2014)) rely on a parameter server with multiple workers running asynchronously to hide communication overhead of weights/(weight gradients) among each other, at a cost of using stale weight gradients from previous iterations. Other works like Pipe-SGD (Li et al.,
2018b) pipeline such communication with local computation of each worker. Another direction is to partition a large model along its layers across multiple GPUs and then stream in small data batches through the layer pipeline, e.g., PipeDream (Harlap et al., 2018) and PipeMare (Yang et al., 2021). Nonetheless, all these works aim at large models with small data, where communication overhead of model weights/weight gradients are substantial but data feature communications are marginal (if not none), thus not well suited for GCNs. More importantly, they focus on convergence with stale weight gradients of models, rather than stale features/feature gradients incurred in GCN training. Tab. 1 summarizes the differences. In a nutshell, little effort has been made to study asynchronous or pipelined distributed training of GCNs, where feature communication plays the major role, let alone the corresponding theoretical convergence proofs.
GCNs with Stale Features/Feature Gradients. Several recent works have been proposed to adopt either stale features (Chen et al., 2018; Cong et al., 2020) or feature gradients (Cong et al., 2021) in single-GPU training of GCNs. Nevertheless, their convergence analysis considers only one of two kinds of staleness and derives a convergence rate of O(T− 12 ) for pure sampling-based methods. This is, however, limited in distributed GCN training as its convergence is simultaneously affected by both kinds of staleness. PipeGCN proves such convergence with both stale features and feature gradients and offers a better rate of O(T− 23 ). Furthermore, none of previous works has studied the errors incurred by staleness which harms the convergence speed, while PipeGCN develops a low-overhead smoothing method to reduce such errors.
3 THE PROPOSED PIPEGCN FRAMEWORK
Overview. To enable efficient distributed GCN training, we first identify the two bottlenecks associated with vanilla partition-parallel training: substantial communication overhead and frequently synchronized communication (see Fig. 1(b)), and then address them directly by proposing a novel strategy, PipeGCN, which pipelines the communication and computation stages across two adjacent iterations in each partition of distributed GCN training for breaking the synchrony and then hiding the communication latency (see Fig. 1(c)). It is non-trivial to achieve efficient GCN training with such a pipeline method, as staleness is incurred in communicated features/feature gradients and
more importantly little effort has been made to study the convergence guarantee of GCN training using stale feature gradients. This work takes an initial effort to prove both the theoretical and empirical convergence of such a pipelined GCN training method, and for the first time shows its convergence rate to be close to that of vanilla GCN training without staleness. Furthermore, we propose a low-overhead smoothing method to reduce the errors due to stale features/feature gradients for further improving the convergence.
3.1 BOTTLENECKS IN VANILLA PARTITION-PARALLEL TRAINING
Significant communication overhead. Fig. 1(a) illustrates vanilla partition-parallel training, where each partition holds inner nodes that come from the original graph and boundary nodes that come from other subgraphs. These boundary nodes are demanded by the neighbor aggregation of GCNs across neighbor partitions, e.g., in Fig. 1(a) node-5 needs nodes-[3,4,6] from other partitions for calculating Equ. 1. Therefore, it is the features/gradients of boundary nodes that dominate the communication overhead in distributed GCN training. Note that the amount of boundary nodes can be excessive and far exceeds the inner nodes, as the
boundary nodes are replicated across partitions and scale with the number of partitions. Besides the sheer size, communication of boundary nodes occurs for (1) each layer and (2) both forward and backward passes, making communication overhead substantial. We evaluate such overhead1 in Tab. 2 and find communication to be dominant, which is consistent with CAGNET (Tripathy et al., 2020).
Frequently synchronized communication. The aforementioned communication of boundary nodes must be finished before calculating Equ. 1 and Equ. 2, which inevitably forces synchronization between communication and computation and requires a fully sequential execution (see Fig. 1(b)). Thus, for most of the training time, each partition is waiting for the dominant feature/gradient communication to finish before the actual compute, repeated for each layer and for both the forward and backward passes.
3.2 THE PROPOSED PIPEGCN METHOD
Fig. 1(c) illustrates the high-level overview of PipeGCN, which pipelines the communicate and compute stages spanning two iterations for each GCN layer. Fig. 2 further provides the detailed end-to-end flow, where PipeGCN removes the heavy communication overhead of the vanilla approach by breaking the synchronization between communicate and compute and hiding communicate with compute of each GCN layer. This is achieved by deferring the communicate to the next iteration's compute (instead of serving the current iteration) such that compute and communicate can run in parallel.
¹The detailed setting can be found in Sec. 4.
Algorithm 1: Training a GCN with PipeGCN (per-partition view).
Input: partition id i, partition count n, graph partition G_i, propagation matrix P_i, node features X_i, labels Y_i, boundary node set B_i, layer count L, learning rate η, initial model W_0
Output: trained model W_T after T iterations

1:  V_i ← {node v ∈ G_i : v ∉ B_i}                                              ▷ create inner node set
2:  Broadcast B_i and Receive [B_1, ..., B_n]
3:  [S_{i,1}, ..., S_{i,n}] ← [B_1 ∩ V_i, ..., B_n ∩ V_i]
4:  Broadcast V_i and Receive [V_1, ..., V_n]
5:  [S_{1,i}, ..., S_{n,i}] ← [B_i ∩ V_1, ..., B_i ∩ V_n]
6:  H^{(0)} ← [X_i ; 0]                                                         ▷ initialize node features, set boundary features to 0
7:  for t := 1 → T do
8:      for ℓ := 1 → L do                                                       ▷ forward pass
9:          if t > 1 then
10:             wait until thread_f^{(ℓ)} completes
11:             [H^{(ℓ−1)}_{S_{1,i}}, ..., H^{(ℓ−1)}_{S_{n,i}}] ← [B^{(ℓ)}_1, ..., B^{(ℓ)}_n]          ▷ update boundary features
12:         end
13:         with thread_f^{(ℓ)}:                                                ▷ communicate boundary features in parallel
14:             Send [H^{(ℓ−1)}_{S_{i,1}}, ..., H^{(ℓ−1)}_{S_{i,n}}] to partitions [1, ..., n] and Receive [B^{(ℓ)}_1, ..., B^{(ℓ)}_n]
15:         H^{(ℓ)}_{V_i} ← σ(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1})                      ▷ update inner node features
16:     end
17:     J^{(L)}_{V_i} ← ∂Loss(H^{(L)}_{V_i}, Y_i) / ∂H^{(L)}_{V_i}
18:     for ℓ := L → 1 do                                                       ▷ backward pass
19:         G^{(ℓ)}_i ← [P_i H^{(ℓ−1)}]^⊤ (J^{(ℓ)}_{V_i} ∘ σ′(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1}))            ▷ calculate weight gradient
20:         if ℓ > 1 then
21:             J^{(ℓ−1)} ← P_i^⊤ (J^{(ℓ)}_{V_i} ∘ σ′(P_i H^{(ℓ−1)} W^{(ℓ)}_{t−1})) [W^{(ℓ)}_{t−1}]^⊤   ▷ calculate feature gradient
22:             if t > 1 then
23:                 wait until thread_b^{(ℓ)} completes
24:                 for j := 1 → n do
25:                     J^{(ℓ−1)}_{S_{i,j}} ← J^{(ℓ−1)}_{S_{i,j}} + C^{(ℓ)}_j                           ▷ accumulate feature gradient
26:                 end
27:             end
28:             with thread_b^{(ℓ)}:                                            ▷ communicate boundary feature gradients in parallel
29:                 Send [J^{(ℓ−1)}_{S_{1,i}}, ..., J^{(ℓ−1)}_{S_{n,i}}] to partitions [1, ..., n] and Receive [C^{(ℓ)}_1, ..., C^{(ℓ)}_n]
30:         end
31:     end
32:     G ← AllReduce(G_i)                                                      ▷ synchronize model gradient
33:     W_t ← W_{t−1} − η · G                                                   ▷ update model
34: end
35: return W_T
Inevitably, staleness is introduced by the deferred communication and results in a mixed usage of fresh inner features/gradients and stale boundary features/gradients.
Analytically, PipeGCN is achieved by modifying Equ. 1. For instance, when using a mean aggregator, Equ. 1 and its corresponding backward formulation in PipeGCN become:

$$z_v^{(t,\ell)} = \mathrm{MEAN}\Big(\{h_u^{(t,\ell-1)} \mid u \in \mathcal{N}(v)\setminus\mathcal{B}(v)\} \cup \{h_u^{(t-1,\ell-1)} \mid u \in \mathcal{B}(v)\}\Big) \qquad (3)$$

$$\delta_{h_u}^{(t,\ell)} = \sum_{v:\, u \in \mathcal{N}(v)\setminus\mathcal{B}(v)} \frac{1}{d_v}\,\delta_{z_v}^{(t,\ell+1)} \;+\; \sum_{v:\, u \in \mathcal{B}(v)} \frac{1}{d_v}\,\delta_{z_v}^{(t-1,\ell+1)} \qquad (4)$$

where B(v) is node v's boundary node set, d_v denotes node v's degree, and δ^{(t,ℓ)}_{h_u} and δ^{(t,ℓ)}_{z_v} represent the gradient approximations of h_u and z_v at layer ℓ and iteration t, respectively. Lastly, the implementation of PipeGCN is outlined in Alg. 1.
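As a rough illustration of how the deferred communication in Equ. 3 and Alg. 1 can be realized, the sketch below (a simplification, not the released code; the buffers `send_bufs`/`recv_bufs`, the `peers` list, and the `aggregate_and_update` callable are hypothetical) overlaps the exchange of boundary features with the local layer computation using asynchronous point-to-point operations from `torch.distributed`:

import torch
import torch.distributed as dist

def pipelined_layer_forward(h_local, boundary_prev, send_bufs, recv_bufs,
                            peers, aggregate_and_update):
    # One PipeGCN-style layer step for a single partition (sketch).
    #   h_local       : features of inner nodes computed in this iteration (fresh)
    #   boundary_prev : boundary features received during the PREVIOUS iteration (stale)
    #   send_bufs     : {peer_rank: tensor of inner-node features that peer needs}
    #   recv_bufs     : {peer_rank: pre-allocated tensor for that peer's boundary features}
    #   aggregate_and_update : callable implementing Equ. 3 plus the node update

    # 1) Launch this iteration's boundary exchange asynchronously; its results
    #    will only be consumed by the NEXT iteration (one step of staleness).
    reqs = []
    for r in peers:
        reqs.append(dist.isend(send_bufs[r], dst=r))
        reqs.append(dist.irecv(recv_bufs[r], src=r))

    # 2) Meanwhile, compute this layer using fresh inner features together with
    #    the stale boundary features from the previous iteration (Equ. 3).
    h_out = aggregate_and_update(h_local, boundary_prev)

    # 3) Wait before the next iteration touches the freshly received buffers.
    for req in reqs:
        req.wait()
    boundary_next = {r: recv_bufs[r].clone() for r in peers}
    return h_out, boundary_next

The key design point is that step (1) serves the next iteration rather than the current one, so the wait in step (3) overlaps with the compute in step (2) instead of blocking it.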
3.3 PIPEGCN’S CONVERGENCE GUARANTEE
As PipeGCN adopts a mixed usage of fresh inner features/gradients and stale boundary features/gradients, its convergence rate is a priori unknown. We have proved the convergence of PipeGCN and present the convergence property in the following theorem.

Theorem 3.1 (Convergence of PipeGCN, informal version). There exists a constant E such that for any arbitrarily small constant ε > 0, we can choose a learning rate η = √ε / E and a number of training iterations T = (L(θ^{(1)}) − L(θ^*)) E ε^{−3/2} such that:

$$\frac{1}{T}\sum_{t=1}^{T} \big\|\nabla\mathcal{L}(\theta^{(t)})\big\|^{2} \le O(\varepsilon)$$

where L(·) is the loss function, and θ^{(t)} and θ^* represent the parameter vector at iteration t and the optimal parameters, respectively.

Therefore, the convergence rate of PipeGCN is O(T^{−2/3}), which is better than that of sampling-based methods (O(T^{−1/2})) (Chen et al., 2018; Cong et al., 2021) and close to that of full-graph training (O(T^{−1})). The formal version of the theorem and our detailed proof can be found in Appendix A.
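To make the rate explicit, the following short rearrangement (simply restating the theorem's parameter choice, with no new assumptions) shows how the iteration count translates into the stated O(T^{−2/3}) rate:

$$T = \big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})\big) E\,\varepsilon^{-3/2}
\;\;\Longrightarrow\;\;
\varepsilon = \Big(\big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})\big) E\Big)^{2/3} T^{-2/3},$$

so the guarantee of Theorem 3.1 becomes

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla\mathcal{L}(\theta^{(t)})\big\|^{2} \;\le\; O(\varepsilon) \;=\; O\!\big(T^{-2/3}\big).$$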
3.4 THE PROPOSED SMOOTHING METHOD
To further improve the convergence of PipeGCN, we propose a smoothing method that reduces the errors incurred by stale features/feature gradients at a minimal overhead. Here we present the smoothing of feature gradients; the same formulation also applies to features. To improve the approximate gradients for each feature, fluctuations in feature gradients between adjacent iterations should be reduced. Therefore, we apply a light-weight moving average to the feature gradients of each boundary node v as follows:

$$\hat{\delta}_{z_v}^{(t,\ell)} = \gamma\,\hat{\delta}_{z_v}^{(t-1,\ell)} + (1-\gamma)\,\delta_{z_v}^{(t,\ell)}$$

where δ̂^{(t,ℓ)}_{z_v} is the smoothed feature gradient at layer ℓ and iteration t, and γ is the decay rate. When integrating this smoothed feature gradient into the backward pass, Equ. 4 can be rewritten as:

$$\hat{\delta}_{h_u}^{(t,\ell)} = \sum_{v:\, u \in \mathcal{N}(v)\setminus\mathcal{B}(v)} \frac{1}{d_v}\,\delta_{z_v}^{(t,\ell+1)} \;+\; \sum_{v:\, u \in \mathcal{B}(v)} \frac{1}{d_v}\,\hat{\delta}_{z_v}^{(t-1,\ell+1)}$$
Note that the smoothing of stale features and gradients can be independently applied to PipeGCN.
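The smoothing above is essentially an exponential moving average maintained per boundary node. A minimal PyTorch-style sketch (the buffer layout is hypothetical and not the paper's exact implementation) is:

import torch

class StalenessSmoother:
    # Exponential moving average over boundary feature gradients (or features).

    def __init__(self, gamma: float = 0.95):
        self.gamma = gamma
        self.ema = None  # lazily initialized smoothed buffer

    def update(self, delta: torch.Tensor) -> torch.Tensor:
        # delta: stacked gradients (or features) of the boundary nodes at this iteration.
        if self.ema is None:
            self.ema = delta.detach().clone()
        else:
            # smoothed^{(t)} = gamma * smoothed^{(t-1)} + (1 - gamma) * delta^{(t)}
            self.ema.mul_(self.gamma).add_(delta.detach(), alpha=1.0 - self.gamma)
        return self.ema

# Usage sketch: smooth the stale boundary gradients before they enter Equ. 4.
smoother = StalenessSmoother(gamma=0.95)
stale_boundary_grad = torch.randn(8, 16)        # hypothetical boundary gradient block
smoothed = smoother.update(stale_boundary_grad)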
4 EXPERIMENT RESULTS
We evaluate PipeGCN on four large-scale datasets: Reddit (Hamilton et al., 2017), ogbn-products (Hu et al., 2020), Yelp (Zeng et al., 2020), and ogbn-papers100M (Hu et al., 2020). More details are provided in Tab. 3. To ensure robustness and reproducibility, we fix (i.e., do not tune) the hyperparameters and settings for PipeGCN and its variants throughout all experiments. To implement partition parallelism (for both vanilla distributed GCN training and PipeGCN), the widely used METIS (Karypis & Kumar, 1998) partition algorithm is adopted for graph partitioning, with its objective set to minimize the communication volume. We implement PipeGCN in PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019). Experiments are conducted on a machine with 10 RTX-2080Ti GPUs (11GB), a Xeon 6230R@2.10GHz CPU (187GB), and PCIe 3.0 x16 connecting CPU-GPU and GPU-GPU. Only for ogbn-papers100M, we use 4 compute nodes (each containing 8 MI60 GPUs, an AMD EPYC 7642 CPU, and 48-lane PCIe 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet. To support full-graph GCN training with the model sizes in Tab. 3, the minimum required numbers of partitions are 2, 3, 5, and 32 for Reddit, ogbn-products, Yelp, and ogbn-papers100M, respectively.
For convenience, we here name all methods: vanilla partition-parallel training of GCNs (GCN), PipeGCN with feature gradient smoothing (PipeGCN-G), PipeGCN with feature smoothing (PipeGCN-F), and PipeGCN with both smoothing (PipeGCN-GF). The default decay rate γ for all smoothing methods is set to 0.95.
4.1 IMPROVING TRAINING THROUGHPUT OVER FULL-GRAPH TRAINING METHODS
Fig. 3 compares the training throughput between PipeGCN and the SOTA full-graph training methods (ROC (Jia et al., 2020) and CAGNET (Tripathy et al., 2020)). We observe that both vanilla partition-parallel training (GCN) and PipeGCN greatly outperform ROC and CAGNET across different numbers of partitions, because they avoid both the expensive CPU-GPU swaps (ROC) and the redundant node broadcast (CAGNET). Specifically, GCN is 3.1×∼16.4× faster than ROC and 2.1×∼10.2× faster than CAGNET (c=2). PipeGCN further improves upon GCN, achieving a throughput improvement of 5.6×∼28.5× over ROC and 3.9×∼17.7× over CAGNET (c=2)². Note that we are not able to compare PipeGCN with NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and P³ (Gandhi & Iyer, 2021) as their code is not publicly available. Besides, Dorylus (Thorpe et al., 2021) is not comparable, as it is not designed for regular GPU servers. Considering the substantial performance gap between ROC/CAGNET and GCN, we focus on comparing GCN with PipeGCN for the remainder of the section.
4.2 IMPROVING TRAINING THROUGHPUT WITHOUT COMPROMISING ACCURACY
We compare the training performance of both test score and training throughput between GCN and PipeGCN in Tab. 4. We can see that PipeGCN without smoothing already achieves a comparable test score with the vanilla GCN training on both Reddit and Yelp, and incurs only a negligible accuracy drop (-0.08%∼-0.23%) on ogbn-products, while boosting the training throughput by 1.72×∼2.16× across all datasets and different number of partitions3, thus validating the effectiveness of PipeGCN.
With the proposed smoothing method plugged in, PipeGCN-G/F/GF is able to compensate for the dropped score of vanilla PipeGCN, achieving a test score equal to or even better than that of the vanilla GCN training (without staleness), e.g., 97.14% vs. 97.11% on Reddit, 79.36% vs. 79.14% on ogbn-products, and 65.28% vs. 65.26% on Yelp. Meanwhile, PipeGCN-G/F/GF enjoys a similar throughput improvement as vanilla PipeGCN, thus validating the negligible overhead of the proposed smoothing method. Therefore, pipelined transfer of features and gradients greatly improves the training throughput while maintaining the full-graph accuracy.
Note that our distributed GCN training methods consistently achieve higher test scores than SOTA sampling-based methods for GraphSAGE-based models reported in (Zeng et al., 2020) and (Hu et al., 2020), confirming that full-graph training is preferred to obtain better GCN models. For example, the best sampling-based method achieves a 96.6% accuracy on Reddit (Zeng et al., 2020) while full-graph GCN training achieves 97.1%, and PipeGCN improves the accuracy by 0.28% over sampling-based GraphSAGE models on ogbn-products (Hu et al., 2020). This advantage of full-graph training is also validated by recent works (Jia et al., 2020; Tripathy et al., 2020; Liu et al., 2022; Wan et al., 2022).
²More detailed comparisons among full-graph training methods can be found in Appendix B.
³More details regarding PipeGCN's advantages in training throughput can be found in Appendix C.
4.3 MAINTAINING CONVERGENCE SPEED
To understand PipeGCN's influence on the convergence speed, we compare the training curves of the different methods in Fig. 4. We observe that the convergence of PipeGCN without smoothing is still comparable with that of the vanilla GCN training, although PipeGCN converges more slowly in the early phase of training and then catches up in the later phase, due to the staleness of boundary features/gradients. With the proposed smoothing methods, PipeGCN-G/F boosts the convergence substantially and matches the convergence speed of vanilla GCN training. There is no clear difference between PipeGCN-G and PipeGCN-F. Lastly, with combined smoothing of features and gradients, PipeGCN-GF can achieve the same or even slightly better convergence speed as vanilla GCN training (e.g., on Reddit) but can gradually overfit, similar to the vanilla GCN training, which is further investigated in Sec. 4.4. Therefore, PipeGCN maintains the convergence speed w.r.t. the number of epochs while reducing the end-to-end training time by around 50% thanks to its boosted training throughput (see Tab. 4).
4.4 BENEFIT OF STALENESS SMOOTHING METHOD
Error Reduction and Convergence Speedup. To understand why the proposed smoothing technique (Sec. 3.4) speeds up convergence, we compare the error incurred by the stale communication between PipeGCN and PipeGCN-G/F. The error is calculated as the Frobenius norm of the gap between the correct gradient/feature and the stale gradient/feature used in PipeGCN training. Fig. 5 compares the error at each GCN layer. We can see that the proposed smoothing technique (PipeGCN-G/F) reduces the staleness error substantially (from the base version of PipeGCN) and this benefit consistently holds across different layers in terms of both feature and gradient errors, validating the effectiveness of our smoothing method and explaining its improvement to the convergence speed.
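For reference, the staleness error reported in Fig. 5 is just the Frobenius norm of the gap between the fresh and the stale quantities; a one-line computation with hypothetical tensors looks like:

import torch

fresh = torch.randn(1024, 256)                   # e.g., the exact boundary feature gradients
stale = fresh + 0.01 * torch.randn_like(fresh)   # e.g., the stale copy actually used in training

staleness_error = torch.linalg.norm(fresh - stale, ord="fro")
print(float(staleness_error))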
Overfitting Mitigation. To understand the effect of staleness smoothing on model overfitting, we also evaluate the test-accuracy convergence under different decay rates γ in Fig. 6. Here ogbn-products is adopted as the study case because the distribution of its test set largely differs from that of its training set. From Fig. 6, we observe that smoothing with a large γ (0.7/0.95) offers fast convergence, i.e., close to the vanilla GCN training, but overfits rapidly. To understand this issue, we further provide detailed comparisons of the errors incurred under different γ in Fig. 7. We can see that a larger γ enjoys lower approximation errors and makes the gradients/features more stable, thus improving the convergence speed. The increased stability on the training set, however, constrains the model from exploring a more general minimum point on the test set, thus leading to overfitting just as in the vanilla GCN training. In contrast, a small γ (0∼0.5) mitigates this overfitting and achieves a better accuracy (see Fig. 6). But a too-small γ (e.g., 0) gives a high error for both stale features and gradients (see Fig. 7), thus suffering from slower convergence. Therefore, a trade-off between convergence speed and achievable optimality exists among different smoothing decay rates, and γ = 0.5 combines the best of both worlds in this study.
4.5 SCALING LARGE GRAPH TRAINING OVER MULTIPLE SERVERS
To further test the capability of PipeGCN, we scale up the graph size to ogbn-papers100M and train GCN over multiple GPU servers with 32 GPUs. Tab. 5 shows that even at such a large-scale setting where communication overhead dominates, PipeGCN still reduces communication time by 61%, leading to a total training time reduction of 38% compared to the vanilla GCN baseline⁴.
5 CONCLUSION
In this work, we propose a new method, PipeGCN, for efficient full-graph GCN training. PipeGCN pipelines communication with computation in distributed GCN training to hide the prohibitive communication overhead. More importantly, we are the first to provide convergence analysis for GCN training with both stale features and feature gradients, and further propose a light-weight smoothing method for convergence speedup. Extensive experiments validate the advantages of PipeGCN over both vanilla GCN training (without staleness) and state-of-the-art full-graph training.
⁴More experiments on multi-server training can be found in Appendix E.
6 ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), the CC∗ Compute program (Award number: 2019007), and the NeTS program (Award number: 1801865).
A CONVERGENCE PROOF
In this section, we prove the convergence of PipeGCN. Specifically, we first show that when the model is updated via gradient descent, the changes of intermediate features and their gradients are bounded by a constant proportional to the learning rate η under standard assumptions. Based on this, we further demonstrate that the error incurred by the staleness is proportional to η, which guarantees that the gradient error is bounded by ηE (where E is defined in Corollary A.10), and thus PipeGCN converges in O(ε^{−3/2}) iterations.
A.1 NOTATIONS AND ASSUMPTIONS
For a given graph G = (V, E) with adjacency matrix A and feature matrix X, we define the propagation matrix P as P := D̃^{−1/2} Ã D̃^{−1/2}, where Ã = A + I and D̃_{u,u} = Σ_v Ã_{u,v}. One GCN layer performs one step of feature propagation (Kipf & Welling, 2016), as formulated below:

$$H^{(0)} = X, \qquad Z^{(\ell)} = P H^{(\ell-1)} W^{(\ell)}, \qquad H^{(\ell)} = \sigma(Z^{(\ell)})$$

where H^{(ℓ)}, W^{(ℓ)}, and Z^{(ℓ)} denote the embedding matrix, the trainable weight matrix, and the intermediate embedding matrix in the ℓ-th layer, respectively, and σ denotes a non-linear activation function. For an L-layer GCN, the loss function is denoted by L(θ) where θ = vec[W^{(1)}, W^{(2)}, ..., W^{(L)}]. We define the ℓ-th layer as a function f^{(ℓ)}(·, ·):

$$f^{(\ell)}(H^{(\ell-1)}, W^{(\ell)}) := \sigma(P H^{(\ell-1)} W^{(\ell)})$$

Its gradient w.r.t. the input embedding matrix can be represented as

$$J^{(\ell-1)} = \nabla_H f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := P^\top M^{(\ell)} [W^{(\ell)}]^\top$$

and its gradient w.r.t. the weight can be represented as

$$G^{(\ell)} = \nabla_W f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := [P H^{(\ell-1)}]^\top M^{(\ell)}$$

where M^{(ℓ)} = J^{(ℓ)} ∘ σ′(P H^{(ℓ−1)} W^{(ℓ)}) and ∘ denotes the Hadamard product. For partition-parallel training, we can split P into two parts P = P_in + P_bd, where P_in represents intra-partition propagation and P_bd denotes inter-partition propagation. For PipeGCN, we can represent one GCN layer as below:

$$\tilde{H}^{(t,0)} = X, \qquad \tilde{Z}^{(t,\ell)} = P_{in} \tilde{H}^{(t,\ell-1)} \tilde{W}^{(t,\ell)} + P_{bd} \tilde{H}^{(t-1,\ell-1)} \tilde{W}^{(t,\ell)}, \qquad \tilde{H}^{(t,\ell)} = \sigma(\tilde{Z}^{(t,\ell)})$$

where t is the epoch number and W̃^{(t,ℓ)} is the weight at epoch t and layer ℓ. We define the loss function for this setting as L̃(θ̃^{(t)}) where θ̃^{(t)} = vec[W̃^{(t,1)}, W̃^{(t,2)}, ..., W̃^{(t,L)}]. We can also summarize the layer as a function f̃^{(t,ℓ)}(·, ·):

$$\tilde{f}^{(t,\ell)}(\tilde{H}^{(t,\ell-1)}, \tilde{W}^{(t,\ell)}) := \sigma(P_{in} \tilde{H}^{(t,\ell-1)} \tilde{W}^{(t,\ell)} + P_{bd} \tilde{H}^{(t-1,\ell-1)} \tilde{W}^{(t,\ell)})$$

Note that H̃^{(t−1,ℓ−1)} is not a part of the input of f̃^{(t,ℓ)}(·, ·) because it is a constant for the t-th epoch. The corresponding backward propagation follows the computation

$$\tilde{J}^{(t,\ell-1)} = \nabla_H \tilde{f}^{(t,\ell)}(\tilde{J}^{(t,\ell)}, \tilde{H}^{(t,\ell-1)}, \tilde{W}^{(t,\ell)}), \qquad \tilde{G}^{(t,\ell)} = \nabla_W \tilde{f}^{(t,\ell)}(\tilde{J}^{(t,\ell)}, \tilde{H}^{(t,\ell-1)}, \tilde{W}^{(t,\ell)})$$

where

$$\tilde{M}^{(t,\ell)} = \tilde{J}^{(t,\ell)} \circ \sigma'(P_{in} \tilde{H}^{(t,\ell-1)} \tilde{W}^{(t,\ell)} + P_{bd} \tilde{H}^{(t-1,\ell-1)} \tilde{W}^{(t,\ell)})$$

$$\nabla_H \tilde{f}^{(t,\ell)}(\tilde{J}^{(t,\ell)}, \tilde{H}^{(t,\ell-1)}, \tilde{W}^{(t,\ell)}) := P_{in}^\top \tilde{M}^{(t,\ell)} [\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top \tilde{M}^{(t-1,\ell)} [\tilde{W}^{(t-1,\ell)}]^\top$$

$$\nabla_W \tilde{f}^{(t,\ell)}(\tilde{J}^{(t,\ell)}, \tilde{H}^{(t,\ell-1)}, \tilde{W}^{(t,\ell)}) := [P_{in} \tilde{H}^{(t,\ell-1)} + P_{bd} \tilde{H}^{(t-1,\ell-1)}]^\top \tilde{M}^{(t,\ell)}$$

Again, J̃^{(t−1,ℓ)} is not a part of the input of ∇_H f̃^{(t,ℓ)}(·, ·, ·) or ∇_W f̃^{(t,ℓ)}(·, ·, ·) because it is a constant for epoch t. Finally, we define ∇L̃(θ̃^{(t)}) = vec[G̃^{(t,1)}, G̃^{(t,2)}, ..., G̃^{(t,L)}]. It should be highlighted that the "gradients" ∇_H f̃^{(t,ℓ)}(·, ·, ·), ∇_W f̃^{(t,ℓ)}(·, ·, ·), and ∇L̃(θ̃^{(t)}) are not the standard gradients for the corresponding forward process due to the stale communication. Properties of gradients cannot be directly applied to these variables.
Before proceeding with our proof, we make the following standard assumptions about the adopted GCN architecture and input graph.

Assumption A.1. The loss function Loss(·, ·) is C_loss-Lipschitz continuous and L_loss-smooth w.r.t. the input node embedding vector, i.e., |Loss(h^{(L)}, y) − Loss(h′^{(L)}, y)| ≤ C_loss ‖h^{(L)} − h′^{(L)}‖_2 and ‖∇Loss(h^{(L)}, y) − ∇Loss(h′^{(L)}, y)‖_2 ≤ L_loss ‖h^{(L)} − h′^{(L)}‖_2, where h is the predicted label and y is the correct label vector.

Assumption A.2. The activation function σ(·) is C_σ-Lipschitz continuous and L_σ-smooth, i.e., ‖σ(z^{(ℓ)}) − σ(z′^{(ℓ)})‖_2 ≤ C_σ ‖z^{(ℓ)} − z′^{(ℓ)}‖_2 and ‖σ′(z^{(ℓ)}) − σ′(z′^{(ℓ)})‖_2 ≤ L_σ ‖z^{(ℓ)} − z′^{(ℓ)}‖_2.

Assumption A.3. For any ℓ ∈ [L], the norms of the weight matrices, the propagation matrix, and the input feature matrix are bounded: ‖W^{(ℓ)}‖_F ≤ B_W, ‖P‖_F ≤ B_P, ‖X‖_F ≤ B_X. (This generic assumption is also used in (Chen et al., 2018; Liao et al., 2020; Garg et al., 2020; Cong et al., 2021).)
A.2 BOUNDED MATRICES AND CHANGES
Lemma A.1. For any ℓ ∈ [L], the Frobenius norms of the node embedding matrices, of the gradients passed from the ℓ-th layer node embeddings to the (ℓ−1)-th, and of the gradient matrices are bounded, i.e.,

$$\|H^{(\ell)}\|_F,\ \|\tilde{H}^{(t,\ell)}\|_F \le B_H, \qquad \|J^{(\ell)}\|_F,\ \|\tilde{J}^{(t,\ell)}\|_F \le B_J,$$
$$\|M^{(\ell)}\|_F,\ \|\tilde{M}^{(t,\ell)}\|_F \le B_M, \qquad \|G^{(\ell)}\|_F,\ \|\tilde{G}^{(t,\ell)}\|_F \le B_G,$$

where

$$B_H = \max_{1\le\ell\le L}(C_\sigma B_P B_W)^{\ell} B_X, \qquad B_J = \max_{2\le\ell\le L}(C_\sigma B_P B_W)^{L-\ell} C_{loss}, \qquad B_M = C_\sigma B_J, \qquad B_G = B_P B_H B_M.$$

Proof. The proof of ‖H^{(ℓ)}‖_F ≤ B_H and ‖J^{(ℓ)}‖_F ≤ B_J can be found in Proposition 1 of (Cong et al., 2021). By induction,

$$\|\tilde{H}^{(t,\ell)}\|_F = \|\sigma(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)})\|_F \le C_\sigma B_W \|P_{in}+P_{bd}\|_F (C_\sigma B_P B_W)^{\ell-1} B_X \le (C_\sigma B_P B_W)^{\ell} B_X,$$

$$\|\tilde{J}^{(t,\ell-1)}\|_F = \big\|P_{in}^\top(\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)}))[\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top(\tilde{J}^{(t-1,\ell)}\circ\sigma'(\tilde{Z}^{(t-1,\ell)}))[\tilde{W}^{(t-1,\ell)}]^\top\big\|_F \le C_\sigma B_W \|P_{in}+P_{bd}\|_F (C_\sigma B_P B_W)^{L-\ell} C_{loss} \le (C_\sigma B_P B_W)^{L-\ell+1} C_{loss},$$

$$\|M^{(\ell)}\|_F = \|J^{(\ell)}\circ\sigma'(Z^{(\ell)})\|_F \le C_\sigma B_J, \qquad \|\tilde{M}^{(t,\ell)}\|_F = \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)})\|_F \le C_\sigma B_J,$$

$$\|G^{(\ell)}\|_F = \|[P H^{(\ell-1)}]^\top M^{(\ell)}\|_F \le B_P B_H B_M, \qquad \|\tilde{G}^{(t,\ell)}\|_F = \|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top \tilde{M}^{(t,\ell)}\|_F \le B_P B_H B_M.$$
Because the gradient matrices are bounded, the weight change is bounded.
Corollary A.2. For any t, ℓ, ‖W̃^{(t,ℓ)} − W̃^{(t−1,ℓ)}‖_F ≤ B_ΔW = ηB_G, where η is the learning rate.
Now we can analyze the changes of intermediate variables.
Lemma A.3. For any t, ℓ, we have ‖Z̃^{(t,ℓ)} − Z̃^{(t−1,ℓ)}‖_F ≤ B_ΔZ and ‖H̃^{(t,ℓ)} − H̃^{(t−1,ℓ)}‖_F ≤ B_ΔH, where

$$B_{\Delta Z} = \sum_{i=0}^{L-1} C_\sigma^i B_P^{i+1} B_W^i B_H B_{\Delta W} \qquad \text{and} \qquad B_{\Delta H} = C_\sigma B_{\Delta Z}.$$

Proof. When ℓ = 0, ‖H̃^{(t,0)} − H̃^{(t−1,0)}‖_F = ‖X − X‖_F = 0. Now we consider ℓ > 0 by induction.

$$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F = \big\|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - (P_{in}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)} + P_{bd}\tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\big\|_F$$
$$= \big\|P_{in}(\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\big\|_F$$

Then we analyze the bound of ‖H̃^{(t,ℓ−1)} W̃^{(t,ℓ)} − H̃^{(t−1,ℓ−1)} W̃^{(t−1,ℓ)}‖_F, which is denoted by s^{(t,ℓ)}:

$$s^{(t,\ell)} \le \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F + \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F \le B_H\|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F + B_W\|\tilde{H}^{(t,\ell-1)} - \tilde{H}^{(t-1,\ell-1)}\|_F$$

According to Corollary A.2, ‖W̃^{(t,ℓ)} − W̃^{(t−1,ℓ)}‖_F ≤ B_ΔW. By induction, ‖H̃^{(t,ℓ−1)} − H̃^{(t−1,ℓ−1)}‖_F ≤ Σ_{i=0}^{ℓ−2} C_σ^{i+1} B_P^{i+1} B_W^i B_H B_ΔW. Combining these inequalities,

$$s^{(t,\ell)} \le B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^i B_P^i B_W^i B_H B_{\Delta W}$$

Plugging it back, we have

$$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le B_P\Big(B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^i B_P^i B_W^i B_H B_{\Delta W}\Big) = \sum_{i=0}^{\ell-1} C_\sigma^i B_P^{i+1} B_W^i B_H B_{\Delta W}$$

$$\|\tilde{H}^{(t,\ell)} - \tilde{H}^{(t-1,\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(\tilde{Z}^{(t-1,\ell)})\|_F \le C_\sigma\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le C_\sigma B_{\Delta Z}$$
Lemma A.4. ‖J̃^{(t,ℓ)} − J̃^{(t−1,ℓ)}‖_F ≤ B_ΔJ where

$$B_{\Delta J} = \max_{2\le\ell\le L}(B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-3} B_P^{i+1} B_W^i C_\sigma^i$$

Proof. For the last layer (ℓ = L), ‖J̃^{(t,L)} − J̃^{(t−1,L)}‖_F ≤ L_loss ‖H̃^{(t,L)} − H̃^{(t−1,L)}‖_F ≤ L_loss B_ΔH. For the case of ℓ < L, we prove the lemma by induction.

$$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F = \big\|\big(P_{in}^\top\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big) - \big(P_{in}^\top\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top + P_{bd}^\top\tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\big\|_F$$
$$\le \big\|P_{in}^\top\big(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big)\big\|_F + \big\|P_{bd}^\top\big(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\big\|_F$$

We denote ‖M̃^{(t,ℓ)}[W̃^{(t,ℓ)}]^⊤ − M̃^{(t−1,ℓ)}[W̃^{(t−1,ℓ)}]^⊤‖_F by s^{(t,ℓ)} and analyze its bound:

$$s^{(t,\ell)} \le \big\|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big\|_F + \big\|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big\|_F \le B_M\big\|[\tilde{W}^{(t,\ell)}]^\top - [\tilde{W}^{(t-1,\ell)}]^\top\big\|_F + B_W\big\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\big\|_F$$

According to Corollary A.2, ‖[W̃^{(t,ℓ)}]^⊤ − [W̃^{(t−1,ℓ)}]^⊤‖_F ≤ B_ΔW. For the second term,

$$\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F = \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t-1,\ell)}\circ\sigma'(\tilde{Z}^{(t-1,\ell)})\|_F \le \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t-1,\ell)})\|_F + \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t-1,\ell)}) - \tilde{J}^{(t-1,\ell)}\circ\sigma'(\tilde{Z}^{(t-1,\ell)})\|_F$$
$$\le B_J\|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F + C_\sigma\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \qquad (5)$$

According to the smoothness of σ and Lemma A.3, ‖σ′(Z̃^{(t,ℓ)}) − σ′(Z̃^{(t−1,ℓ)})‖_F ≤ L_σ B_ΔZ. By induction,

$$\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell-1} B_P^{i+1} B_W^i C_\sigma^i$$

As a result,

$$s^{(t,\ell)} \le B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z} + B_W C_\sigma\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F = (B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z}) + B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=1}^{L-\ell} B_P^i B_W^i C_\sigma^i$$
$$\le B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell} B_P^i B_W^i C_\sigma^i$$

Therefore,

$$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F \le \big\|P_{in}^\top\big(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big)\big\|_F + \big\|P_{bd}^\top\big(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\big\|_F$$
$$\le B_P\, s^{(t,\ell)} \le (B_P B_W C_\sigma)^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell} B_P^{i+1} B_W^i C_\sigma^i$$

From Equation 5, we can also conclude that

Corollary A.5. ‖M̃^{(t,ℓ)} − M̃^{(t−1,ℓ)}‖_F ≤ B_ΔM with B_ΔM = B_J L_σ B_ΔZ + C_σ B_ΔJ.
A.3 BOUNDED FEATURE ERROR AND GRADIENT ERROR
In this subsection, we compare the difference between the generic GCN and PipeGCN with the same parameter set, i.e., θ = θ̃^{(t)}.
Lemma A.6. ‖Z̃^{(t,ℓ)} − Z^{(ℓ)}‖_F ≤ E_Z and ‖H̃^{(t,ℓ)} − H^{(ℓ)}‖_F ≤ E_H, where

$$E_Z = B_{\Delta H}\sum_{i=1}^{L} C_\sigma^{i-1} B_W^i B_P^i \qquad \text{and} \qquad E_H = B_{\Delta H}\sum_{i=1}^{L}(C_\sigma B_W B_P)^i.$$

Proof.

$$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F = \big\|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - P H^{(\ell-1)} W^{(\ell)}\big\|_F \le \big\|(P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)} - P H^{(\ell-1)}) W^{(\ell)}\big\|_F$$
$$= B_W\big\|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\big\|_F \le B_W B_P\big(\|\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}\|_F + B_{\Delta H}\big)$$

By induction, we assume that ‖H̃^{(t,ℓ−1)} − H^{(ℓ−1)}‖_F ≤ B_ΔH Σ_{i=1}^{ℓ−1} (C_σ B_W B_P)^i. Therefore,

$$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_W B_P B_{\Delta H}\sum_{i=0}^{\ell-1}(C_\sigma B_W B_P)^i = B_{\Delta H}\sum_{i=1}^{\ell} C_\sigma^{i-1} B_W^i B_P^i$$

$$\|\tilde{H}^{(t,\ell)} - H^{(\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(Z^{(\ell)})\|_F \le C_\sigma\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_{\Delta H}\sum_{i=1}^{\ell}(C_\sigma B_W B_P)^i$$
Lemma A.7. ‖J̃^{(t,ℓ)} − J^{(ℓ)}‖_F ≤ E_J and ‖M̃^{(t,ℓ)} − M^{(ℓ)}‖_F ≤ E_M with

$$E_J = \max_{2\le\ell\le L}(B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + B_P\big(B_W(B_J E_Z L_\sigma + B_{\Delta M}) + B_{\Delta W} B_M\big)\sum_{i=0}^{L-3}(B_P B_W C_\sigma)^i$$

$$E_M = C_\sigma E_J + L_\sigma B_J E_Z$$

Proof. When ℓ = L, ‖J̃^{(t,L)} − J^{(L)}‖_F ≤ L_loss E_H. For any ℓ, we assume that

$$\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + U\sum_{i=0}^{L-\ell-1}(B_P B_W C_\sigma)^i \qquad (6)$$

$$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} C_\sigma L_{loss} E_H + U C_\sigma\sum_{i=0}^{L-\ell-1}(B_P B_W C_\sigma)^i + L_\sigma B_J E_Z \qquad (7)$$

where U = B_P (B_W B_J E_Z L_σ + B_ΔW B_M + B_W B_ΔM). We prove them by induction as follows.

$$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F = \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)}) - J^{(\ell)}\circ\sigma'(Z^{(\ell)})\|_F \le \|\tilde{J}^{(t,\ell)}\circ\sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t,\ell)}\circ\sigma'(Z^{(\ell)})\|_F + \|\tilde{J}^{(t,\ell)}\circ\sigma'(Z^{(\ell)}) - J^{(\ell)}\circ\sigma'(Z^{(\ell)})\|_F \le B_J\|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(Z^{(\ell)})\|_F + C_\sigma\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F$$

Here ‖σ′(Z̃^{(t,ℓ)}) − σ′(Z^{(ℓ)})‖_F ≤ L_σ E_Z. With Equation 6,

$$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} C_\sigma L_{loss} E_H + U C_\sigma\sum_{i=0}^{L-\ell-1}(B_P B_W C_\sigma)^i + L_\sigma B_J E_Z$$

On the other hand,

$$\|\tilde{J}^{(t,\ell-1)} - J^{(\ell-1)}\|_F = \big\|P_{in}^\top\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - P^\top M^{(\ell)}[W^{(\ell)}]^\top\big\|_F = \big\|P^\top(\tilde{M}^{(t,\ell)} - M^{(\ell)})[W^{(\ell)}]^\top + P_{bd}^\top(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top)\big\|_F$$
$$\le \big\|P^\top(\tilde{M}^{(t,\ell)} - M^{(\ell)})[W^{(\ell)}]^\top\big\|_F + \big\|P_{bd}^\top(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top)\big\|_F \le B_P B_W\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F + B_P\|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F$$

The first part is bounded by Equation 7. For the second part,

$$\|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F \le \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F + \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F \le B_{\Delta W} B_M + B_W B_{\Delta M}$$

Therefore,

$$\|\tilde{J}^{(t,\ell-1)} - J^{(\ell-1)}\|_F \le B_P B_W\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F + B_P\|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F \le (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U\sum_{i=1}^{L-\ell}(B_P B_W C_\sigma)^i + U = (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U\sum_{i=0}^{L-\ell}(B_P B_W C_\sigma)^i$$
Lemma A.8. ‖G̃^{(t,ℓ)} − G^{(ℓ)}‖_F ≤ E_G where E_G = B_P (B_H E_M + B_M E_H).

Proof.

$$\|\tilde{G}^{(t,\ell)} - G^{(\ell)}\|_F = \big\|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top M^{(\ell)}\big\|_F \le \big\|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top\tilde{M}^{(t,\ell)}\big\|_F + \big\|[P H^{(\ell-1)}]^\top\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top M^{(\ell)}\big\|_F$$
$$\le B_M\big\|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\big\|_F + B_P B_H E_M \le B_M B_P (E_H + B_{\Delta H}) + B_P B_H E_M$$

By summing up both sides from ℓ = 1 to ℓ = L, we have

Corollary A.9. ‖∇L̃(θ) − ∇L(θ)‖_2 ≤ E_loss where E_loss = L E_G.
According to the derivation of E_loss, we observe that E_loss contains a factor η. To simplify the expression of E_loss, we assume that B_P B_W C_σ ≤ 1/2 without loss of generality, and rewrite Corollary A.9 as the following.

Corollary A.10. ‖∇L̃(θ) − ∇L(θ)‖_2 ≤ ηE where

$$E = \frac{1}{8} L B_P^3 B_X^2 C_{loss} C_\sigma \big(3 B_X C_\sigma^2 L_{loss} + 6 B_X C_{loss} L_\sigma + 10 C_{loss} C_\sigma^2\big)$$

A.4 PROOF OF THE MAIN THEOREM
We first introduce a lemma before the proof of our main theorem.

Lemma A.11 (Lemma 1 in (Cong et al., 2021)). An L-layer GCN is L_f-Lipschitz smooth, i.e., ‖∇L(θ_1) − ∇L(θ_2)‖_2 ≤ L_f ‖θ_1 − θ_2‖_2.

Now we prove the main theorem.

Theorem A.12 (Convergence of PipeGCN, formal). Under Assumptions A.1, A.2, and A.3, we can derive the following by choosing a learning rate η = √ε / E and a number of training iterations T = (L(θ^{(1)}) − L(θ^*)) E ε^{−3/2}:

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla\mathcal{L}(\theta^{(t)})\big\|^{2} \le 3\varepsilon$$

where E is defined in Corollary A.10, ε > 0 is an arbitrarily small constant, L(·) is the loss function, and θ^{(t)} and θ^* represent the parameter vector at iteration t and the optimal parameters, respectively.

Proof. By the smoothness of the model,

$$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) + \big\langle\nabla\mathcal{L}(\theta^{(t)}),\, \theta^{(t+1)} - \theta^{(t)}\big\rangle + \frac{L_f}{2}\|\theta^{(t+1)} - \theta^{(t)}\|_2^2 = \mathcal{L}(\theta^{(t)}) - \eta\big\langle\nabla\mathcal{L}(\theta^{(t)}),\, \nabla\tilde{\mathcal{L}}(\theta^{(t)})\big\rangle + \frac{\eta^2 L_f}{2}\|\nabla\tilde{\mathcal{L}}(\theta^{(t)})\|_2^2$$

Let δ^{(t)} = ∇L̃(θ^{(t)}) − ∇L(θ^{(t)}) and η ≤ 1/L_f; we have

$$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) - \eta\big\langle\nabla\mathcal{L}(\theta^{(t)}),\, \nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)}\big\rangle + \frac{\eta}{2}\|\nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)}\|_2^2 \le \mathcal{L}(\theta^{(t)}) - \frac{\eta}{2}\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 + \frac{\eta}{2}\|\delta^{(t)}\|_2^2$$

From Corollary A.10 we know that ‖δ^{(t)}‖_2 ≤ ηE. After rearranging the terms,

$$\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta}\big(\mathcal{L}(\theta^{(t)}) - \mathcal{L}(\theta^{(t+1)})\big) + \eta^2 E^2$$

Summing up from t = 1 to T and taking the average,

$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta T}\big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{(T+1)})\big) + \eta^2 E^2 \le \frac{2}{\eta T}\big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})\big) + \eta^2 E^2$$

where θ^* is the minimum point of L(·). By taking η = √ε / E and T = (L(θ^{(1)}) − L(θ^*)) E ε^{−3/2} with an arbitrarily small constant ε > 0, we have

$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|^{2} \le 3\varepsilon$$
B TRAINING TIME BREAKDOWN OF FULL-GRAPH TRAINING METHODS
To understand why PipeGCN significantly boosts the training throughput over full-graph training methods, we provide the detailed time breakdown in Tab. 6 using the same model as Tab. 3 (4-layer GraphSAGE, 256 hidden units), in which “GCN” denotes the vanilla partition-parallel training illustrated in Fig. 1(a). We observe that PipeGCN greatly saves communication time.
C TRAINING TIME IMPROVEMENT BREAKDOWN OF PIPEGCN
To understand the training time improvement offered by PipeGCN, we further break down the epoch time into three parts (intra-partition computation, inter-partition communication, and reduce for aggregating the model gradient) and provide the result in Fig. 8. We can observe that: 1) inter-partition communication dominates the training time in vanilla partition-parallel training (GCN); 2) PipeGCN (with or without smoothing) greatly hides the communication overhead across different numbers of partitions and all datasets, e.g., the communication time is hidden completely in 2-partition Reddit and almost completely in 3-partition Yelp, hence the substantial reduction in training time; and 3) the proposed smoothing incurs only minimal overhead (i.e., a minor difference between PipeGCN and PipeGCN-GF). Lastly, we also notice that when the communication ratio is extremely large (85%+), PipeGCN hides communication significantly but not completely (e.g., 10-partition ogbn-products), in which case we can employ compression and quantization techniques (Alistarh et al. (2017); Seide et al. (2014); Wen et al. (2017); Li et al. (2018a); Yu et al. (2018)) from the area of general distributed SGD to further reduce the communication, as compression is orthogonal to the pipeline method. Besides compression, we can also increase the pipeline depth of PipeGCN, e.g., using two iterations of compute to hide one iteration of communication, which is left to our future work.
D MAINTAINING CONVERGENCE SPEED (ADDITIONAL EXPERIMENTS)
We provide the additional convergence curves on Yelp in Fig. 9. We can see that PipeGCN and its variants maintain the convergence speed w.r.t the number of epochs while substantially reducing the end-to-end training time.
E SCALING GCN TRAINING OVER MULTIPLE GPU SERVERS
We also scale up PipeGCN training over multiple GPU servers (each containing AMD Radeon Instinct MI60 GPUs, an AMD EPYC 7642 CPU, and 48-lane PCIe 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet.
The accuracy results of PipeGCN and its variants are summarized in Tab. 7:
Furthermore, we provide PipeGCN’s speedup against vanilla partition-parallel training in Tab. 8:
From the two tables above, we can observe that our PipeGCN family consistently maintains the accuracy of the full-graph training, while improving the throughput by 15%∼66% regardless of the machine settings and number of partitions.
F IMPLEMENTATION DETAILS
We discuss the details of the effective and efficient implementation of PipeGCN in this section.
First, for parallel communication and computation, a second cudaStream is required for communication besides the default cudaStream for computation. To also save memory buffers for communication, we batch all communication (e.g., from different layers) into this second cudaStream. When the popular communication backend, Gloo, is used, we parallelize the CPU-GPU transfer with CPU-CPU transfer.
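As a rough sketch of this first point (not the released implementation; the tensor contents and the `comm_fn` callable are hypothetical), communication work can be queued on a dedicated CUDA stream so that it overlaps with computation issued on the default stream:

import torch

comm_stream = torch.cuda.Stream()          # dedicated stream for (batched) communication

def overlap_step(compute_fn, comm_fn, boundary_buf):
    # Run comm_fn (e.g., the batched boundary transfers of all layers)
    # concurrently with compute_fn, which stays on the default stream.
    with torch.cuda.stream(comm_stream):
        comm_fn(boundary_buf)
    out = compute_fn()                      # overlaps with the communication above
    torch.cuda.current_stream().wait_stream(comm_stream)  # sync before buffers are reused
    return out

Batching all per-layer transfers into this single extra stream also avoids allocating one communication buffer set per layer, matching the memory-saving motivation described above.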
Second, when a Dropout layer is used in the GCN model, it should be applied after communication. The implementation of the dropout layer for PipeGCN must be handled carefully so that the dropout mask remains consistent between the input tensor and the corresponding gradient. If the input feature passes through the dropout layer before being communicated, then during the backward phase the dropout mask has changed and the gradient of masked values becomes involved in the computation, which introduces noise into the calculation of subsequent gradients. As a result, the dropout layer can only be applied after receiving the boundary features.

1. What is the focus and contribution of the paper on distributed GCN training?
2. What are the strengths of the proposed approach, particularly in terms of improving training throughput?
3. Do you have any concerns regarding the novelty of the paper and its comparison with other works?
4. How can the paper be improved, especially regarding experimental comparisons and figure illustrations?
5. What is the influence of the smoothing technique on the results, and how can it be better analyzed?

Summary Of The Paper
This paper proposes PipeGCN, which pipelines communication and computation in distributed GCN training to improve training throughput. Analysis is conducted to show the convergence speed when using both stale gradient and feature vectors. Extensive experiments are conducted to show that PipeGCN significantly improves the efficiency of vanilla distributed GCN training without hurting model accuracy.
Review
I think the paper is a well-executed work: the discussions about related works are extensive, the idea behind PipeGCN is clearly explained, the convergence speed of PipeGCN is analyzed theoretically and the experiments are comprehensive. I have some concerns on the novelty of the paper. As mentioned by the authors, Dorylus considered using stale features. To my knowledge, using stale gradients is well studied in Pipe-SGD related works. PipeGCN seems to combine the two ideas by using both stale features and stale gradients. Thus, the core problem is the technical contribution of the analysis of PipeGCN, which I do not have the technical expertise to evaluate. I think the paper can be improved by fixing the following problems.
Experimental comparisons with Dorylus may be included to show the benefits of using stale gradients.
Figure 1 and Figure 2 can be improved. In Figure 1(b), the “…” for Part 2 should be removed and all communication tasks should be explicitly listed (e.g., communicate features, communicate gradients, and note that communicate gradients happen after some computation). In Figure 1(b) and (c), the computation and communication tasks should have iteration counts such that the pipelining idea can be understood. In Figure 2, “communicate for Next L1 backward” happens before the L1 backward of the current iteration, how is that possible? Is it a typo?
The influence of the smoothing technique can be better analyzed. In Table 4, there is not a consistent winner among PipeGCN, PipeGCN-G and PipeGCN-F in model accuracy.
ICLR | Title
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
Abstract
Graph Convolutional Networks (GCNs) is the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead of communicating node features and feature gradients among partitions for every GCN layer during each training iteration, limiting the achievable training efficiency and model scalability. To this end, we propose PipeGCN, a simple yet effective scheme that hides the communication overhead by pipelining inter-partition communication with intra-partition computation. It is non-trivial to pipeline for efficient GCN training, as communicated node features/gradients will become stale and thus can harm the convergence, negating the pipeline benefit. Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients. This work not only provides a theoretical convergence analysis but also finds the convergence rate of PipeGCN to be close to that of the vanilla distributed GCN training without any staleness. Furthermore, we develop a smoothing method to further improve PipeGCN’s convergence. Extensive experiments show that PipeGCN can largely boost the training throughput (1.7×∼28.5×) while achieving the same accuracy as its vanilla counterpart and existing full-graph training methods. The code is available at https://github.com/RICE-EIC/PipeGCN.
1 INTRODUCTION
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) have gained great popularity recently as they demonstrated the state-of-the-art (SOTA) performance in learning graph-structured data (Zhang & Chen, 2018; Xu et al., 2018; Ying et al., 2018). Their promising performance is resulting from their ability to capture diverse neighborhood connectivity. In particular, a GCN aggregates all features from the neighbor node set for a given node, the feature of which is then updated via a multi-layer perceptron. Such a two-step process (neighbor aggregation and node update) empowers GCNs to better learn graph structures. Despite their promising performance, training GCNs at scale is still a challenging problem, as a prohibitive amount of compute and memory resources are required to train a real-world large-scale graph, let alone exploring deeper and more advanced models. To overcome this challenge, various sampling-based methods have been proposed to reduce the resource requirement at a cost of incurring feature approximation errors. A straightforward instance is to create mini-batches by sampling neighbors (e.g., GraphSAGE (Hamilton et al., 2017) and VR-GCN (Chen et al., 2018)) or to extract subgraphs as training samples (e.g., Cluster-GCN (Chiang et al., 2019) and GraphSAINT (Zeng et al., 2020)).
In addition to sampling-based methods, distributed GCN training has emerged as a promising alternative, as it enables large full-graph training of GCNs across multiple accelerators such as GPUs.
This approach first partitions a giant graph into multiple small subgraps, each of which is able to fit into a single GPU, and then train these partitioned subgraphs locally on GPUs together with indispensable communication across partitions. Following this direction, several recent works (Ma et al., 2019; Jia et al., 2020; Tripathy et al., 2020; Thorpe et al., 2021; Wan et al., 2022) have been proposed and verified the great potential of distributed GCN training. P 3 (Gandhi & Iyer, 2021) follows another direction that splits the data along the feature dimension and leverages intra-layer model parallelism for training, which shows superior performance on small models.
In this work, we propose a new method for distributed GCN training, PipeGCN, which targets achieving a full-graph accuracy with boosted training efficiency. Our main contributions are following:
• We first analyze two efficiency bottlenecks in distributed GCN training: the required significant communication overhead and frequent synchronization, and then propose a simple yet effective technique called PipeGCN to address both aforementioned bottlenecks by pipelining inter-partition communication with intra-partition computation to hide the communication overhead.
• We address the challenge raised by PipeGCN, i.e., the resulting staleness in communicated features and feature gradients (neither weights nor weight gradients), by providing a theoretical convergence analysis and showing that PipeGCN’s convergence rate is O(T− 23 ), i.e., close to vanilla distributed GCN training without staleness. To the best of our knowledge, we are the first to provide a theoretical convergence proof of GCN training with both stale feature and stale feature gradients.
• We further propose a low-overhead smoothing method to further improve PipeGCN’s convergence by reducing the error incurred by the staleness.
• Extensive empirical and ablation studies consistently validate the advantages of PipeGCN over both vanilla distributed GCN training and those SOTA full-graph training methods (e.g., boosting the training throughput by 1.7×∼28.5× while achieving the same or a better accuracy).
2 BACKGROUND AND RELATED WORKS
Graph Convolutional Networks. GCNs represent each node in a graph as a feature (embedding) vector and learn the feature vector via a two-step process (neighbor aggregation and then node update) for each layer, which can be mathematically described as:
z(`)v = ζ (`) ({ h(`−1)u | u ∈ N (v) }) (1)
h(`)v = φ (`) ( z(`)v , h (`−1) v ) (2)
where N (v) is the neighbor set of node v in the graph, h(`)v represents the learned embedding vector of node v at the `-th layer, z(`)v is an intermediate aggregated feature calculated by an aggregation function ζ(`), and φ(`) is the function for updating the feature of node v. The original GCN (Kipf & Welling, 2016) uses a weighted average aggregator for ζ(`) and the update function φ(`) is a single-layer perceptron σ(W (`)z(`)v ) where σ(·) is a non-linear activation function and W (`) is a weight matrix. Another famous GCN instance is GraphSAGE (Hamilton et al., 2017) in which φ(`) is σ ( W (`) · CONCAT ( z (`) v , h (`−1) v )) .
Distributed Training for GCNs. A real-world graph can contain millions of nodes and billions of edges (Hu et al., 2020), for which a feasible training approach is to partition it into small subgraphs (to fit into each GPU’s resource), and train them in parallel, during which necessary communication is performed to exchange boundary node features and gradients to satisfy GCNs’s neighbor aggregation (Equ. 1). Such an approach is called vanilla partition-parallel training and is illustrated in Fig. 1 (a). Following this approach, several works have been proposed recently. NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and ROC (Jia et al., 2020) perform such partition-parallel training but rely on CPUs for storage for all partitions and repeated swapping of a partial partition to GPUs. Inevitably, prohibitive CPU-GPU swaps are incurred, plaguing the achievable training efficiency. CAGNET (Tripathy et al., 2020) is different in that it splits each node feature vector into tiny sub-vectors which are then broadcasted and computed sequentially, thus requiring redundant communication and frequent synchronization. Furthermore, P 3 (Gandhi & Iyer, 2021) proposes to split both the feature and the GCN layer for mitigating the communication overhead, but it makes a strong assumption that the hidden dimensions of a GCN should be considerably smaller than that of
(a) Vanilla partition-parallel training
2 5
1
6
3
4
Graph
Inner Node Boundary Node
Communicate Boundary Feature & Grad
Pipeline
Communicate Compute ...
... ... Iteration #2Iteration #1
52 61
Part 3 ...
Part 1 ...
4
2
5
1
6 4
Communicate Compute ...
Iteration #2
52 61
...
...
... ...Communicate Compute Compute
Communicate
(b) Timeline of vanilla partition-parallel training (c) PipeGCN
2
6
5
3 Part 2
Iteration #1
Timeline of (a)
3 4 52 3 4 52
... ...
... ...
Timeline of PipeGCN
Part 1 Part 2 Part 3 Partition
1 2
6 5
3
4
Figure 1: An illustrative comparison between vanilla partition-parallel training and PipeGCN.
input features, which restricts the model size. A concurrent work Dorylus (Thorpe et al., 2021) adopts a fine-grained pipeline along each compute operation in GCN training and supports asynchronous usage of stale features. Nevertheless, the resulting staleness of feature gradients is neither analyzed nor considered for convergence proof, let alone error reduction methods for the incurred staleness.
Asynchronous Distributed Training. Many prior works have been proposed for asynchronous distributed training of DNNs. Most works (e.g., Hogwild! (Niu et al., 2011), SSP (Ho et al., 2013), and MXNet (Li et al., 2014)) rely on a parameter server with multiple workers running asynchronously to hide communication overhead of weights/(weight gradients) among each other, at a cost of using stale weight gradients from previous iterations. Other works like Pipe-SGD (Li et al.,
2018b) pipeline such communication with local computation of each worker. Another direction is to partition a large model along its layers across multiple GPUs and then stream in small data batches through the layer pipeline, e.g., PipeDream (Harlap et al., 2018) and PipeMare (Yang et al., 2021). Nonetheless, all these works aim at large models with small data, where communication overhead of model weights/weight gradients are substantial but data feature communications are marginal (if not none), thus not well suited for GCNs. More importantly, they focus on convergence with stale weight gradients of models, rather than stale features/feature gradients incurred in GCN training. Tab. 1 summarizes the differences. In a nutshell, little effort has been made to study asynchronous or pipelined distributed training of GCNs, where feature communication plays the major role, let alone the corresponding theoretical convergence proofs.
GCNs with Stale Features/Feature Gradients. Several recent works have been proposed to adopt either stale features (Chen et al., 2018; Cong et al., 2020) or feature gradients (Cong et al., 2021) in single-GPU training of GCNs. Nevertheless, their convergence analysis considers only one of two kinds of staleness and derives a convergence rate of O(T− 12 ) for pure sampling-based methods. This is, however, limited in distributed GCN training as its convergence is simultaneously affected by both kinds of staleness. PipeGCN proves such convergence with both stale features and feature gradients and offers a better rate of O(T− 23 ). Furthermore, none of previous works has studied the errors incurred by staleness which harms the convergence speed, while PipeGCN develops a low-overhead smoothing method to reduce such errors.
3 THE PROPOSED PIPEGCN FRAMEWORK
Overview. To enable efficient distributed GCN training, we first identify the two bottlenecks associated with vanilla partition-parallel training: substantial communication overhead and frequently synchronized communication (see Fig. 1(b)), and then address them directly by proposing a novel strategy, PipeGCN, which pipelines the communication and computation stages across two adjacent iterations in each partition of distributed GCN training for breaking the synchrony and then hiding the communication latency (see Fig. 1(c)). It is non-trivial to achieve efficient GCN training with such a pipeline method, as staleness is incurred in communicated features/feature gradients and
more importantly little effort has been made to study the convergence guarantee of GCN training using stale feature gradients. This work takes an initial effort to prove both the theoretical and empirical convergence of such a pipelined GCN training method, and for the first time shows its convergence rate to be close to that of vanilla GCN training without staleness. Furthermore, we propose a low-overhead smoothing method to reduce the errors due to stale features/feature gradients for further improving the convergence.
3.1 BOTTLENECKS IN VANILLA PARTITION-PARALLEL TRAINING
Significant communication overhead. Fig. 1(a) illustrates vanilla partition-parallel training, where each partition holds inner nodes that come from the original graph and boundary nodes that come from other subgraphs. These boundary nodes are demanded by the neighbor aggregation of GCNs across neighbor partitions, e.g., in Fig. 1(a) node-5 needs nodes-[3,4,6] from other partitions for calculating Equ. 1. Therefore, it is the features/gradients of boundary nodes that dominate the communication overhead in distributed GCN training. Note that the amount of boundary nodes can be excessive and far exceeds the inner nodes, as the
boundary nodes are replicated across partitions and scale with the number of partitions. Besides the sheer size, communication of boundary nodes occurs for (1) each layer and (2) both forward and backward passes, making communication overhead substantial. We evaluate such overhead1 in Tab. 2 and find communication to be dominant, which is consistent with CAGNET (Tripathy et al., 2020).
Frequently synchronized communication. The aforementioned communication of boundary nodes must be finished before calculating Equ. 1 and Equ. 2, which inevitably forces synchronization between communication and computation and requires a fully sequential execution (see Fig. 1(b)). Thus, for most of training time, each partition is waiting for dominant features/gradients communication to finish before the actual compute, repeated for each layer and for both forward and backward passes.
3.2 THE PROPOSED PIPEGCN METHOD
Fig. 1(c) illustrates the high-level overview of PipeGCN, which pipelines the communicate and compute stages spanning two iterations for each GCN layer. Fig. 2 further provides the detailed end-to-end flow, where PipeGCN removes the heavy communication overhead in the vanilla approach by breaking the synchronization between communicate and compute and hiding communicate with compute of each GCN layer. This is achieved by deferring the communicate to next iteration’s compute (instead of serving the current iteration) such that compute and communicate can run in
1The detailed setting can be found in Sec. 4.
Algorithm 1: Training a GCN with PipeGCN (per-partition view). Input: partition id i, partition count n, graph partition Gi, propagation matrix Pi, node feature Xi, label Yi, boundary node set Bi, layer count L, learning rate η, initial model W0 Output: trained model WT after T iterations
1 Vi ← {node v ∈ Gi : v /∈ Bi} . create inner node set 2 Broadcast Bi and Receive [B1, · · · ,Bn] 3 [Si,1, · · · ,Si,n]← [B1 ∩ Vi, · · · ,Bn ∩ Vi] 4 Broadcast Vi and Receive [V1, · · · ,Vn] 5 [S1,i, · · · ,Sn,i]← [Bi ∩ V1, · · · ,Bi ∩ Vn]
6 H(0) ← [ Xi 0 ] . initialize node feature, set boundary feature as 0
7 for t := 1→ T do 8 for ` := 1→ L do . forward pass 9 if t > 1 then 10 wait until thread(`)f completes 11 [H (`−1) S1,i , · · · , H (`−1) Sn,i ]← [B (`) 1 , · · · , B (`) n ] . update boundary feature 12 end 13 with thread(`)f . communicate boundary features in parallel 14 Send [H(`−1)Si,1 , · · · , H (`−1) Si,n ] to partition [1, · · · , n] and Receive [B (`) 1 , · · · , B (`) n ] 15 H (`) Vi ← σ(PiH (`−1)W (`) t−1) . update inner nodes feature 16 end
17 J (L) Vi ←
∂Loss(H (L) Vi ,Yi)
∂H (L) Vi
18 for ` := L→ 1 do . backward pass 19 G
(`) i ← [ PiH (`−1) ]> ( J (`) Vi ◦ σ ′(PiH (`−1)W (`) t−1) ) . calculate weight gradient
20 if ` > 1 then 21 J(`−1) ← P>i ( J (`) Vi ◦ σ ′(PiH (`−1)W (`) t−1) ) [W (`) t−1] > . calculate feature gradient 22 if t > 1 then 23 wait until thread(`)b completes 24 for j := 1→ n do 25 J
(`−1) Si,j ← J (`−1) Si,j + C (`) j . accumulate feature gradient
26 end 27 end 28 with thread(`)b . communicate boundary feature gradient in parallel 29 Send [J(`−1)S1,i , · · · , J (`−1) Sn,i ] to partition [1, · · · , n] and Receive [C (`) 1 , · · · , C (`) n ] 30 end 31 end 32 G← AllReduce(Gi) . synchronize model gradient 33 Wt ←Wt−1 − η ·G . update model 34 end 35 return WT
parallel. Inevitably, staleness is introduced in the deferred communication and results in a mixture usage of fresh inner features/gradients and staled boundary features/gradients.
Analytically, PipeGCN is achieved by modifying Equ. 1. For instance, when using a mean aggregator, Equ. 1 and its corresponding backward formulation in PipeGCN become:
z(t,`)v = MEAN ( {h(t,`−1)u | u ∈ N (v) \ B(v)} ∪ {h(t−1,`−1)u | u ∈ B(v)} ) (3)
δ (t,`) hu
= ∑
v:u∈N (v)\B(v)
1
dv · δ(t,`+1)zv + ∑ v:u∈B(v) 1 dv · δ(t−1,`+1)zv (4)
where B(v) is node v’s boundary node set, dv denotes node v’s degree, and δ(t,`)hu and δ (t,`) zv represent the gradient approximation of hu and zv at layer ` and iteration t, respectively. Lastly, the implementation of PipeGCN are outlined in Alg. 1.
3.3 PIPEGCN’S CONVERGENCE GUARANTEE
As PipeGCN adopts a mixture usage of fresh inner features/gradients and staled boundary features/gradients, its convergence rate is still unknown. We have proved the convergence of PipeGCN and present the convergence property in the following theorem. Theorem 3.1 (Convergence of PipeGCN, informal version). There exists a constant E such that for any arbitrarily small constant ε > 0, we can choose a learning rate η = √ ε E and number of training iterations T = (L(θ(1))− L(θ∗))Eε− 32 such that:
1
T T∑ t=1 ‖∇L(θ(t))‖2 ≤ O(ε)
where L(·) is the loss function, θ(t) and θ∗ represent the parameter vector at iteration t and the optimal parameter respectively.
Therefore, the convergence rate of PipeGCN is O(T− 23 ), which is better than sampling-based method (O(T− 12 )) (Chen et al., 2018; Cong et al., 2021) and close to full-graph training (O(T−1)). The formal version of the theorem and our detailed proof can be founded in Appendix A.
3.4 THE PROPOSED SMOOTHING METHOD
To further improve the convergence of PipeGCN, we propose a smoothing method to reduce errors incurred by stale features/feature gradients at a minimal overhead. Here we present the smoothing of feature gradients, and the same formulation also applies to features. To improve the approximate gradients for each feature, fluctuations in feature gradients between adjacent iterations should be reduced. Therefore, we apply a light-weight moving average to the feature gradients of each boundary node v as follow:
δ̂(t,`)zv = γδ̂ (t−1,`) zv + (1− γ)δ (t,`) zv
where δ̂(t,`)zv is the smoothed feature gradient at layer ` and iteration t, and γ is the decay rate. When integrating this smoothed feature gradient method into the backward pass, Equ. 4 can be rewritten as:
δ̂ (t,`) hu
= ∑
v:u∈N (v)\B(v)
1
dv · δ(t,`+1)zv + ∑ v:u∈B(v) 1 dv · δ̂(t−1,`+1)zv
Note that the smoothing of stale features and gradients can be independently applied to PipeGCN.
4 EXPERIMENT RESULTS
We evaluate PipeGCN on four large-scale datasets, Reddit (Hamilton et al., 2017), ogbn-products (Hu et al., 2020), Yelp (Zeng et al., 2020), and ogbn-papers100M (Hu et al., 2020). More details are provided in Tab. 3. To ensure robustness and reproducibility, we fix (i.e., do not tune) the hyperparameters and settings for PipeGCN and its variants throughout all experiments. To implement partition parallelism (for both vanilla distributed GCN training and PipeGCN), the widely used METIS (Karypis & Kumar, 1998) partition algorithm is adopted for graph partition with its objective set to minimize the communication volume. We implement PipeGCN in PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019). Experiments are conducted on a machine with 10 RTX-2080Ti (11GB), Xeon 6230R@2.10GHz (187GB), and PCIe3x16 connecting CPU-GPU and GPU-GPU. Only for ogbn-papers100M, we use 4 compute nodes (each contains 8 MI60 GPUs, an AMD EPYC 7642 CPU, and 48 lane PCI 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet. To support full-graph GCN training with the model sizes in Tab. 3, the minimum required partition numbers are 2, 3, 5, 32 for Reddit, ogbn-products, Yelp, and ogbn-papers100M, respectively.
For convenience, we here name all methods: vanilla partition-parallel training of GCNs (GCN), PipeGCN with feature gradient smoothing (PipeGCN-G), PipeGCN with feature smoothing (PipeGCN-F), and PipeGCN with both smoothing (PipeGCN-GF). The default decay rate γ for all smoothing methods is set to 0.95.
4.1 IMPROVING TRAINING THROUGHPUT OVER FULL-GRAPH TRAINING METHODS
Fig. 3 compares the training throughput between PipeGCN and the SOTA full-graph training methods (ROC (Jia et al., 2020) and CAGNET (Tripathy et al., 2020)). We observe that both vanilla partitionparallel training (GCN) and PipeGCN greatly outperform ROC and CAGNET across different number of partitions, because they avoid both the expensive CPU-GPU swaps (ROC) and the redundant node broadcast (CAGNET). Specifically, GCN is 3.1×∼16.4× faster than ROC and 2.1×∼10.2× faster than CAGNET (c=2). PipeGCN further improves upon GCN, achieving a throughput improvement of 5.6×∼28.5× over ROC and 3.9×∼17.7× over CAGNET (c=2)2. Note that we are not able to compare PipeGCN with NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and P 3 (Gandhi & Iyer, 2021) as their code are not publicly available. Besides, Dorylus (Thorpe et al., 2021) is not comparable, as it is not for regular GPU servers. Considering the substantial performance gap between ROC/CAGNET and GCN, we focus on comparing GCN with PipeGCN for the reminder of the section.
4.2 IMPROVING TRAINING THROUGHPUT WITHOUT COMPROMISING ACCURACY
We compare the training performance of both test score and training throughput between GCN and PipeGCN in Tab. 4. We can see that PipeGCN without smoothing already achieves a comparable test score with the vanilla GCN training on both Reddit and Yelp, and incurs only a negligible accuracy drop (-0.08%∼-0.23%) on ogbn-products, while boosting the training throughput by 1.72×∼2.16× across all datasets and different number of partitions3, thus validating the effectiveness of PipeGCN.
With the proposed smoothing method plugged in, PipeGCN-G/F/GF is able to compensate for the dropped score of vanilla PipeGCN, achieving a test score equal to or even better than that of vanilla GCN training (without staleness), e.g., 97.14% vs. 97.11% on Reddit, 79.36% vs. 79.14% on ogbn-products and 65.28% vs. 65.26% on Yelp. Meanwhile, PipeGCN-G/F/GF enjoys a similar throughput improvement as vanilla PipeGCN, thus validating the negligible overhead of the proposed smoothing method. Therefore, pipelined transfer of features and gradients greatly improves the training throughput while maintaining the full-graph accuracy.
Note that our distributed GCN training methods consistently achieve higher test scores than SOTA sampling-based methods for GraphSAGE-based models reported in (Zeng et al., 2020) and (Hu et al., 2020), confirming that full-graph training is preferred to obtain better GCN models. For example, the best sampling-based method achieves a 96.6% accuracy on Reddit (Zeng et al., 2020) while full-graph GCN training achieves 97.1%, and PipeGCN improves the accuracy by 0.28% over sampling-based GraphSAGE models on ogbn-products (Hu et al., 2020). This advantage of full-graph training is also validated by recent works (Jia et al., 2020; Tripathy et al., 2020; Liu et al., 2022; Wan et al., 2022).
2More detailed comparisons among full-graph training methods can be found in Appendix B. 3More details regarding PipeGCN’s advantages in training throughput can be found in Appendix C.
4.3 MAINTAINING CONVERGENCE SPEED
To understand PipeGCN's influence on the convergence speed, we compare the training curves of different methods in Fig. 4. We observe that the convergence of PipeGCN without smoothing is still comparable with that of the vanilla GCN training, although PipeGCN converges slower at the early phase of training and then catches up at the later phase, due to the staleness of boundary features/gradients. With the proposed smoothing methods, PipeGCN-G/F boosts the convergence substantially and matches the convergence speed of vanilla GCN training. There is no clear difference between PipeGCN-G and PipeGCN-F. Lastly, with combined smoothing of features and gradients, PipeGCN-GF can achieve the same or even slightly better convergence speed as vanilla GCN training (e.g., on Reddit) but can overfit gradually, similarly to the vanilla GCN training, which is further investigated in Sec. 4.4. Therefore, PipeGCN maintains the convergence speed w.r.t. the number of epochs while reducing the end-to-end training time by around 50% thanks to its boosted training throughput (see Tab. 4).
4.4 BENEFIT OF STALENESS SMOOTHING METHOD
Error Reduction and Convergence Speedup. To understand why the proposed smoothing technique (Sec. 3.4) speeds up convergence, we compare the error incurred by the stale communication between PipeGCN and PipeGCN-G/F. The error is calculated as the Frobenius-norm of the gap between the correct gradient/feature and the stale gradient/feature used in PipeGCN training. Fig. 5 compares the error at each GCN layer. We can see that the proposed smoothing technique (PipeGCN-G/F) reduces the error of staleness substantially (from the base version of PipeGCN) and this benefit consistently holds across different layers in terms of both feature and gradient errors, validating the effectiveness of our smoothing method and explaining its improvement to the convergence speed.
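For reference, this error metric reduces to a single Frobenius norm of the gap between the two tensors; a minimal sketch (function name illustrative):

```python
import torch

def staleness_error(exact: torch.Tensor, stale: torch.Tensor) -> float:
    """Frobenius norm of the gap between the exact feature/gradient and the
    stale (optionally smoothed) one actually used during PipeGCN training."""
    return torch.norm(exact - stale, p='fro').item()
```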
Overfitting Mitigation. To understand the effect of staleness smoothing on model overfitting, we also evaluate the test-accuracy convergence under different decay rates γ in Fig. 6. Here ogbn-products is adopted as the study case because the distribution of its test set largely differs from that of its training set. From Fig. 6, we observe that smoothing with a large γ (0.7/0.95) offers a fast convergence, i.e., close to the vanilla GCN training, but overfits rapidly. To understand this issue, we
further provide detailed comparisons of the errors incurred under different γ in Fig. 7. We can see that a larger γ enjoys lower approximation errors and makes the gradients/features more stable, thus improving the convergence speed. The increased stability on the training set, however, constrains the model from exploring a more general minimum point on the test set, thus leading to overfitting, as in the vanilla GCN training. In contrast, a small γ (0 ∼ 0.5) mitigates this overfitting and achieves a better accuracy (see Fig. 6). But a too-small γ (e.g., 0) gives a high error for both stale features and gradients (see Fig. 7), thus suffering from a slower convergence. Therefore, a trade-off between convergence speed and achievable optimality exists among different smoothing decay rates, and γ = 0.5 combines the best of both worlds in this study.
4.5 SCALING LARGE GRAPH TRAINING OVER MULTIPLE SERVERS
To further test the capability of PipeGCN, we scale up the graph size to ogbn-papers100M and train GCN over multiple GPU servers with 32 GPUs. Tab. 5 shows that even at such a large-scale setting where communication overhead dominates, PipeGCN still reduces communication time by 61%, leading to a total training time reduction of 38% compared to the vanilla GCN baseline4.
5 CONCLUSION
In this work, we propose a new method, PipeGCN, for efficient full-graph GCN training. PipeGCN pipelines communication with computation in distributed GCN training to hide the prohibitive communication overhead. More importantly, we are the first to provide convergence analysis for GCN training with both stale features and feature gradients, and further propose a light-weight smoothing method for convergence speedup. Extensive experiments validate the advantages of PipeGCN over both vanilla GCN training (without staleness) and state-of-the-art full-graph training.
4More experiments on multi-server training can be found in Appendix E.
6 ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), the CC∗ Compute program (Award number: 2019007), and the NeTS program (Award number: 1801865).
A CONVERGENCE PROOF
In this section, we prove the convergence of PipeGCN. Specifically, we first show that when the model is updated via gradient descent, the changes of intermediate features and their gradients are bounded by a constant that is proportional to the learning rate η under standard assumptions. Based on this, we further demonstrate that the error incurred by the staleness is proportional to η, which guarantees that the gradient error is bounded by ηE where E is defined in Corollary A.10, and thus PipeGCN converges in O(ε^{-3/2}) iterations.
A.1 NOTATIONS AND ASSUMPTIONS
For a given graph G = (V, E) with an adjacency matrix A and feature matrix X, we define the propagation matrix P as $P := \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$, where $\tilde{A} = A + I$ and $\tilde{D}_{u,u} = \sum_v \tilde{A}_{u,v}$. One GCN layer performs one step of feature propagation (Kipf & Welling, 2016) as formulated below
$$H^{(0)} = X,\qquad Z^{(\ell)} = P H^{(\ell-1)} W^{(\ell)},\qquad H^{(\ell)} = \sigma(Z^{(\ell)})$$
where H(`), W (`), and Z(`) denote the embedding matrix, the trainable weight matrix, and the intermediate embedding matrix in the `-th layer, respectively, and σ denotes a non-linear activation function. For an L-layer GCN, the loss function is denoted by L(θ) where θ = vec[W (1),W (2), · · · ,W (L)]. We define the `-th layer as a function f (`)(·, ·).
$$f^{(\ell)}(H^{(\ell-1)}, W^{(\ell)}) := \sigma(P H^{(\ell-1)} W^{(\ell)})$$
Its gradient w.r.t. the input embedding matrix can be represented as
$$J^{(\ell-1)} = \nabla_H f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := P^\top M^{(\ell)} [W^{(\ell)}]^\top$$
and its gradient w.r.t. the weight can be represented as
$$G^{(\ell)} = \nabla_W f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := [P H^{(\ell-1)}]^\top M^{(\ell)}$$
where $M^{(\ell)} = J^{(\ell)} \circ \sigma'(P H^{(\ell-1)} W^{(\ell)})$ and $\circ$ denotes the Hadamard product. For partition-parallel training, we can split $P$ into two parts $P = P_{in} + P_{bd}$, where $P_{in}$ represents intra-partition propagation and $P_{bd}$ denotes inter-partition propagation. For PipeGCN, we can represent one GCN layer as below
$$\tilde{H}^{(t,0)} = X,\qquad \tilde{Z}^{(t,\ell)} = P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)},\qquad \tilde{H}^{(t,\ell)} = \sigma(\tilde{Z}^{(t,\ell)})$$
where t is the epoch number and W̃ (t,`) is the weight at epoch t layer `. We define the loss function for this setting as L̃(θ̃(t)) where θ̃(t) = vec[W̃ (t,1), W̃ (t,2), · · · , W̃ (t,L)]. We can also summarize the layer as a function f̃ (t,`)(·, ·)
f̃ (t,`)(H̃(t,`−1), W̃ (t,`)) := σ(PinH̃ (t,`−1)W̃ (t,`) + PbdH̃ (t−1,`−1)W̃ (t,`))
Note that H̃(t−1,`−1) is not a part of the input of f̃ (t,`)(·, ·) because it is a constant for the t-th epoch. The corresponding backward propagation follows the following computation
J̃ (t,`−1) = ∇H f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`))
G̃(t,`) = ∇W f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) where
M̃ (t,`) = J̃ (t,`) ◦ σ′(PinH̃(t,`−1)W̃ (t,`) + PbdH̃(t−1,`−1)W̃ (t,`))
∇H f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) := P>inM̃ (t,`)[W̃ (t,`)]> + P>bdM̃ (t−1,`)[W̃ (t−1,`)]>
∇W f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) := [PinH̃(t,`−1) + PbdH̃(t−1,`−1)]>M̃ (t,`)
Again, J̃ (t−1,`) is not a part of the input of∇H f̃ (t,`)(·, ·, ·) or∇W f̃ (t,`)(·, ·, ·) because it is a constant for epoch t. Finally, we define∇L̃(θ̃(t)) = vec[G̃(t,1), G̃(t,2), · · · , G̃(t,L)]. It should be highlighted that the ‘gradient’ ∇H f̃ (t,`)(·, ·, ·), ∇W f̃ (t,`)(·, ·, ·) and ∇L̃(θ̃(t)) are not the standard gradient for the corresponding forward process due to the stale communication. Properties of gradient cannot be directly applied to these variables.
Before proceeding our proof, we make the following standard assumptions about the adopted GCN architecture and input graph. Assumption A.1. The loss function Loss(·, ·) is Closs-Lipschitz continuous and Lloss-smooth w.r.t. to the input node embedding vector, i.e., |Loss(h(L), y)− Loss(h′(L), y)| ≤ Closs‖h(L) − h′(L)‖2 and ‖∇Loss(h(L), y)−∇Loss(h′(L), y)‖2 ≤ Lloss‖h(L) − h′(L)‖2 where h is the predicted label and y is the correct label vector.
Assumption A.2. The activation function σ(·) is Cσ-Lipschitz continuous and Lσ-smooth, i.e., ‖σ(z(`))− σ(z′(`))‖2 ≤ Cσ‖z(`) − z′(`)‖2 and ‖σ′(z(`))− σ′(z′(`))‖2 ≤ Lσ‖z(`) − z′(`)‖2. Assumption A.3. For any ` ∈ [L], the norm of weight matrices, the propagation matrix, and the input feature matrix are bounded: ‖W (`)‖F ≤ BW , ‖P‖F ≤ BP , ‖X‖F ≤ BX . (This generic assumption is also used in (Chen et al., 2018; Liao et al., 2020; Garg et al., 2020; Cong et al., 2021).)
A.2 BOUNDED MATRICES AND CHANGES
Lemma A.1. For any $\ell \in [L]$, the Frobenius norms of the node embedding matrices, the gradients passed from the $\ell$-th layer node embeddings to the $(\ell-1)$-th, and the gradient matrices are bounded, i.e.,
$$\|H^{(\ell)}\|_F, \|\tilde{H}^{(t,\ell)}\|_F \le B_H,\quad \|J^{(\ell)}\|_F, \|\tilde{J}^{(t,\ell)}\|_F \le B_J,\quad \|M^{(\ell)}\|_F, \|\tilde{M}^{(t,\ell)}\|_F \le B_M,\quad \|G^{(\ell)}\|_F, \|\tilde{G}^{(t,\ell)}\|_F \le B_G$$
where
$$B_H = \max_{1\le\ell\le L}(C_\sigma B_P B_W)^{\ell} B_X,\quad B_J = \max_{2\le\ell\le L}(C_\sigma B_P B_W)^{L-\ell} C_{loss},\quad B_M = C_\sigma B_J,\quad B_G = B_P B_H B_M$$
Proof. The proof of $\|H^{(\ell)}\|_F \le B_H$ and $\|J^{(\ell)}\|_F \le B_J$ can be found in Proposition 1 in (Cong et al., 2021). By induction,
$$\|\tilde{H}^{(t,\ell)}\|_F = \|\sigma(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)})\|_F \le C_\sigma B_W \|P_{in}+P_{bd}\|_F (C_\sigma B_P B_W)^{\ell-1} B_X \le (C_\sigma B_P B_W)^{\ell} B_X$$
$$\|\tilde{J}^{(t,\ell-1)}\|_F = \left\| P_{in}^\top \big(\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)})\big) [\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top \big(\tilde{J}^{(t-1,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)})\big) [\tilde{W}^{(t-1,\ell)}]^\top \right\|_F \le C_\sigma B_W \|P_{in}+P_{bd}\|_F (C_\sigma B_P B_W)^{L-\ell} C_{loss} \le (C_\sigma B_P B_W)^{L-\ell+1} C_{loss}$$
$$\|M^{(\ell)}\|_F = \|J^{(\ell)} \circ \sigma'(Z^{(\ell)})\|_F \le C_\sigma B_J,\qquad \|\tilde{M}^{(t,\ell)}\|_F = \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)})\|_F \le C_\sigma B_J$$
$$\|G^{(\ell)}\|_F = \|[P H^{(\ell-1)}]^\top M^{(\ell)}\|_F \le B_P B_H B_M,\qquad \|\tilde{G}^{(t,\ell)}\|_F = \|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top \tilde{M}^{(t,\ell)}\|_F \le B_P B_H B_M$$
Because the gradient matrices are bounded, the weight change is bounded.
Corollary A.2. For any t, `, ‖W̃ (t,`) − W̃ (t−1,`)‖F ≤ B∆W = ηBG where η is the learning rate.
Now we can analyze the changes of intermediate variables.
Lemma A.3. For any $t, \ell$, we have $\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le B_{\Delta Z}$ and $\|\tilde{H}^{(t,\ell)} - \tilde{H}^{(t-1,\ell)}\|_F \le B_{\Delta H}$, where $B_{\Delta Z} = \sum_{i=0}^{L-1} C_\sigma^i B_P^{i+1} B_W^i B_H B_{\Delta W}$ and $B_{\Delta H} = C_\sigma B_{\Delta Z}$.
Proof. When ` = 0, ‖H̃(t,0)− H̃(t−1,0)‖F = ‖X−X‖F = 0. Now we consider ` > 0 by induction.
$$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F = \|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - (P_{in}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)} + P_{bd}\tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\|_F$$
$$= \|P_{in}(\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\|_F$$
Then we analyze the bound of $\|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F$, which is denoted by $s^{(t,\ell)}$.
$$s^{(t,\ell)} \le \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F + \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F \le B_H\|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F + B_W\|\tilde{H}^{(t,\ell-1)} - \tilde{H}^{(t-1,\ell-1)}\|_F$$
According to Corollary A.2, $\|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F \le B_{\Delta W}$. By induction, $\|\tilde{H}^{(t,\ell-1)} - \tilde{H}^{(t-1,\ell-1)}\|_F \le \sum_{i=0}^{\ell-2} C_\sigma^{i+1} B_P^{i+1} B_W^i B_H B_{\Delta W}$. Combining these inequalities,
$$s^{(t,\ell)} \le B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^i B_P^i B_W^i B_H B_{\Delta W}$$
Plugging it back, we have
$$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le B_P\Big(B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^i B_P^i B_W^i B_H B_{\Delta W}\Big) = \sum_{i=0}^{\ell-1} C_\sigma^i B_P^{i+1} B_W^i B_H B_{\Delta W}$$
$$\|\tilde{H}^{(t,\ell)} - \tilde{H}^{(t-1,\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(\tilde{Z}^{(t-1,\ell)})\|_F \le C_\sigma\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le C_\sigma B_{\Delta Z}$$
Lemma A.4. $\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \le B_{\Delta J}$ where
$$B_{\Delta J} = \max_{2\le\ell\le L}(B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-3} B_P^{i+1} B_W^i C_\sigma^i$$
Proof. For the last layer (` = L), ‖J̃ (t,L)− J̃ (t−1,L)‖F ≤ Lloss‖H̃(t,L)− H̃(t−1,L)‖F ≤ LlossB∆H . For the case of ` < L, we prove the lemma by using induction.
$$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F = \left\|\big(P_{in}^\top \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top + P_{bd}^\top \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big) - \big(P_{in}^\top \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top + P_{bd}^\top \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\right\|_F$$
$$\le \left\|P_{in}^\top\big(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big)\right\|_F + \left\|P_{bd}^\top\big(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\right\|_F$$
We denote $\|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\|_F$ by $s^{(t,\ell)}$ and analyze its bound.
$$s^{(t,\ell)} \le \|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\|_F + \|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\|_F \le B_M\|[\tilde{W}^{(t,\ell)}]^\top - [\tilde{W}^{(t-1,\ell)}]^\top\|_F + B_W\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F$$
According to Corollary A.2, $\|[\tilde{W}^{(t,\ell)}]^\top - [\tilde{W}^{(t-1,\ell)}]^\top\|_F \le B_{\Delta W}$. For the second term,
$$\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F = \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t-1,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F \le B_J\|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F + C_\sigma\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \quad (5)$$
According to the smoothness of σ and Lemma A.3, ‖σ′(Z̃(t,`)) − σ′(Z̃(t−1,`))‖F ≤ LσB∆Z . By induction,
$$\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell-1} B_P^{i+1} B_W^i C_\sigma^i$$
As a result,
$$s^{(t,\ell)} \le B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z} + B_W C_\sigma\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F = (B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z}) + B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=1}^{L-\ell} B_P^i B_W^i C_\sigma^i$$
$$\le B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell} B_P^i B_W^i C_\sigma^i$$
$$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F \le \left\|P_{in}^\top\big(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top\big)\right\|_F + \left\|P_{bd}^\top\big(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^\top\big)\right\|_F \le B_P\, s^{(t,\ell)} \le (B_P B_W C_\sigma)^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W)\sum_{i=0}^{L-\ell} B_P^{i+1} B_W^i C_\sigma^i$$
From Equation 5, we can also conclude that
Corollary A.5. ‖M̃ (t,`) − M̃ (t−1,`)‖F ≤ B∆M with B∆M = BJLσB∆Z + CσB∆J .
A.3 BOUNDED FEATURE ERROR AND GRADIENT ERROR
In this subsection, we compare the difference between generic GCN and PipeGCN with the same parameter set, i.e., θ = θ̃(t).
Lemma A.6. $\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le E_Z$ and $\|\tilde{H}^{(t,\ell)} - H^{(\ell)}\|_F \le E_H$, where $E_Z = B_{\Delta H}\sum_{i=1}^{L} C_\sigma^{i-1} B_W^i B_P^i$ and $E_H = B_{\Delta H}\sum_{i=1}^{L}(C_\sigma B_W B_P)^i$.
Proof.
$$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F = \|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - P H^{(\ell-1)} W^{(\ell)}\|_F \le \|(P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)} - P H^{(\ell-1)}) W^{(\ell)}\|_F \le B_W\|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\|_F$$
$$\le B_W B_P\big(\|\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}\|_F + B_{\Delta H}\big)$$
By induction, we assume that $\|\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}\|_F \le B_{\Delta H}\sum_{i=1}^{\ell-1}(C_\sigma B_W B_P)^i$. Therefore,
$$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_W B_P B_{\Delta H}\sum_{i=0}^{\ell-1}(C_\sigma B_W B_P)^i = B_{\Delta H}\sum_{i=1}^{\ell} C_\sigma^{i-1} B_W^i B_P^i$$
$$\|\tilde{H}^{(t,\ell)} - H^{(\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(Z^{(\ell)})\|_F \le C_\sigma\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_{\Delta H}\sum_{i=1}^{\ell}(C_\sigma B_W B_P)^i$$
Lemma A.7. $\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F \le E_J$ and $\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le E_M$ with
$$E_J = \max_{2\le\ell\le L}(B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + B_P\big(B_W(B_J E_Z L_\sigma + B_{\Delta M}) + B_{\Delta W} B_M\big)\sum_{i=0}^{L-3}(B_P B_W C_\sigma)^i$$
$$E_M = C_\sigma E_J + L_\sigma B_J E_Z$$
Proof. When ` = L, ‖J̃ (t,L) − J (L)‖F ≤ LlossEH . For any `, we assume that
$$\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + U\sum_{i=0}^{L-\ell-1}(B_P B_W C_\sigma)^i \quad (6)$$
$$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} C_\sigma L_{loss} E_H + U C_\sigma\sum_{i=0}^{L-\ell-1}(B_P B_W C_\sigma)^i + L_\sigma B_J E_Z \quad (7)$$
where U = BP (BWBJEZLσ +B∆WBM +BWB∆M ). We prove them by induction as follows.
‖M̃ (t,`) −M (`)‖F = ‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J (`) ◦ σ′(Z(`))‖F ≤ ‖J̃ (t,`) ◦ σ′(Z̃(t,`))− J̃ (t,`) ◦ σ′(Z(`))‖F + ‖J̃ (t,`) ◦ σ′(Z(`))− J (`) ◦ σ′(Z(`))‖F ≤ BJ‖σ′(Z̃(t,`))− σ′(Z(`))‖F + Cσ‖J̃ (t,`) − J (`)‖F
Here ‖σ′(Z̃(t,`))− σ′(Z(`))‖F ≤ LσEZ . With Equation 6,
‖M̃ (t,`) −M (`)‖F ≤ (BPBWCσ)L−`CσLlossEH + UCσ L−`−1∑ i=0 (BPBWCσ) i + LσBJEZ
On the other hand, ‖J̃ (t,`−1) − J (`−1)‖F = ‖P>inM̃ (t,`)[W̃ (t,`)]> + P>bdM̃ (t−1,`)[W̃ (t−1,`)]> − P>M (`)[W (`)]>‖F = ‖P>(M̃ (t,`) −M (`))[W (`)]> + P>bd(M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>)‖F ≤ ‖P>(M̃ (t,`) −M (`))[W (`)]>‖F + ‖P>bd(M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>)‖F ≤ BPBW ‖M̃ (t,`) −M (`)‖F +BP ‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F
The first part is bounded by Equation 7. For the second part,
‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F ≤ ‖M̃ (t−1,`)[W̃ (t−1,`)]> − M̃ (t−1,`)[W̃ (t,`)]>‖F + ‖M̃ (t−1,`)[W̃ (t,`)]> − M̃ (t,`)[W̃ (t,`)]>‖F ≤ B∆WBM +BWB∆M
Therefore,
$$\|\tilde{J}^{(t,\ell-1)} - J^{(\ell-1)}\|_F \le B_P B_W\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F + B_P\|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^\top - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^\top\|_F \le (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U\sum_{i=1}^{L-\ell}(B_P B_W C_\sigma)^i + U = (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U\sum_{i=0}^{L-\ell}(B_P B_W C_\sigma)^i$$
Lemma A.8. ‖G̃(t,`) −G(`)‖F ≤ EG where EG = BP (BHEM +BMEH)
Proof.
$$\|\tilde{G}^{(t,\ell)} - G^{(\ell)}\|_F = \left\|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top \tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top M^{(\ell)}\right\|_F$$
$$\le \left\|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^\top \tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top \tilde{M}^{(t,\ell)}\right\|_F + \left\|[P H^{(\ell-1)}]^\top \tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^\top M^{(\ell)}\right\|_F$$
$$\le B_M\|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\|_F + B_P B_H E_M \le B_M B_P(E_H + B_{\Delta H}) + B_P B_H E_M$$
By summing up from ` = 1 to ` = L to both sides, we have
Corollary A.9. ‖∇L̃(θ)−∇L(θ)‖2 ≤ Eloss where Eloss = LEG.
According to the derivation of Eloss, we observe that Eloss contains a factor η. To simplify the expression of Eloss, we assume that BPBWCσ ≤ 12 without loss of generality, and rewrite Corollary A.9 as the following.
Corollary A.10. $\|\nabla\tilde{\mathcal{L}}(\theta) - \nabla\mathcal{L}(\theta)\|_2 \le \eta E$ where
$$E = \frac{1}{8} L B_P^3 B_X^2 C_{loss} C_\sigma\big(3 B_X C_\sigma^2 L_{loss} + 6 B_X C_{loss} L_\sigma + 10 C_{loss} C_\sigma^2\big)$$
A.4 PROOF OF THE MAIN THEOREM
We first introduce a lemma before the proof of our main theorem. Lemma A.11 (Lemma 1 in (Cong et al., 2021)). An L-layer GCN is Lf-Lipschitz smooth, i.e., ‖∇L(θ1)−∇L(θ2)‖2 ≤ Lf‖θ1 − θ2‖2.
Now we prove the main theorem. Theorem A.12 (Convergence of PipeGCN, formal). Under Assumptions A.1, A.2, and A.3, we can derive the following by choosing a learning rate $\eta = \frac{\sqrt{\varepsilon}}{E}$ and a number of training iterations $T = (\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*}))E\varepsilon^{-3/2}$:
$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|^2 \le 3\varepsilon$$
where E is defined in Corollary A.10, ε > 0 is an arbitrarily small constant, L(·) is the loss function, θ(t) and θ∗ represent the parameter vector at iteration t and the optimal parameter, respectively.
Proof. With the smoothness of the model,
$$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) + \langle\nabla\mathcal{L}(\theta^{(t)}), \theta^{(t+1)} - \theta^{(t)}\rangle + \frac{L_f}{2}\|\theta^{(t+1)} - \theta^{(t)}\|_2^2 = \mathcal{L}(\theta^{(t)}) - \eta\langle\nabla\mathcal{L}(\theta^{(t)}), \nabla\tilde{\mathcal{L}}(\theta^{(t)})\rangle + \frac{\eta^2 L_f}{2}\|\nabla\tilde{\mathcal{L}}(\theta^{(t)})\|_2^2$$
Let $\delta^{(t)} = \nabla\tilde{\mathcal{L}}(\theta^{(t)}) - \nabla\mathcal{L}(\theta^{(t)})$ and $\eta \le 1/L_f$; we have
$$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) - \eta\langle\nabla\mathcal{L}(\theta^{(t)}), \nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)}\rangle + \frac{\eta}{2}\|\nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)}\|_2^2 \le \mathcal{L}(\theta^{(t)}) - \frac{\eta}{2}\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 + \frac{\eta}{2}\|\delta^{(t)}\|_2^2$$
From Corollary A.10 we know that $\|\delta^{(t)}\|_2 < \eta E$. After rearranging the terms,
$$\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta}\big(\mathcal{L}(\theta^{(t)}) - \mathcal{L}(\theta^{(t+1)})\big) + \eta^2 E^2$$
Summing up from t = 1 to T and taking the average,
$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta T}\big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{(T+1)})\big) + \eta^2 E^2 \le \frac{2}{\eta T}\big(\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})\big) + \eta^2 E^2$$
where $\theta^{*}$ is the minimum point of $\mathcal{L}(\cdot)$. By taking $\eta = \frac{\sqrt{\varepsilon}}{E}$ and $T = (\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*}))E\varepsilon^{-3/2}$ with an arbitrarily small constant $\varepsilon > 0$, we have
$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le 3\varepsilon$$
B TRAINING TIME BREAKDOWN OF FULL-GRAPH TRAINING METHODS
To understand why PipeGCN significantly boosts the training throughput over full-graph training methods, we provide the detailed time breakdown in Tab. 6 using the same model as Tab. 3 (4-layer GraphSAGE, 256 hidden units), in which “GCN” denotes the vanilla partition-parallel training illustrated in Fig. 1(a). We observe that PipeGCN greatly saves communication time.
C TRAINING TIME IMPROVEMENT BREAKDOWN OF PIPEGCN
To understand the training time improvement offered by PipeGCN, we further break down the epoch time into three parts (intra-partition computation, inter-partition communication, and reduce for aggregating the model gradient) and provide the result in Fig. 8. We can observe that: 1) inter-partition communication dominates the training time in vanilla partition-parallel training (GCN); 2) PipeGCN (with or without smoothing) greatly hides the communication overhead across different numbers of partitions and all datasets, e.g., the communication time is hidden completely in 2-partition Reddit and almost completely in 3-partition Yelp, thus the substantial reduction in training time; and 3) the proposed smoothing incurs only minimal overhead (i.e., minor difference between PipeGCN and PipeGCN-GF). Lastly, we also notice that when the communication ratio is extremely large (85%+), PipeGCN hides communication significantly but not completely (e.g., 10-partition ogbn-products), in which case we can employ those compression and quantization techniques (Alistarh et al. (2017); Seide et al. (2014); Wen et al. (2017); Li et al. (2018a); Yu et al. (2018)) from the area of general distributed SGD for further reducing the communication, as the compression is orthogonal to the pipeline method. Besides compression, we can also increase the pipeline depth of PipeGCN, e.g., using two iterations of compute to hide one iteration of communication, which is left to our future work.
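A hedged sketch of how such a per-stage breakdown can be collected with CUDA events in PyTorch is shown below; the stage callables named in the comments are placeholders, not the actual PipeGCN code.

```python
import torch

def timed(fn, *args, **kwargs):
    """Run fn on the current CUDA stream and return (result, elapsed milliseconds)."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    out = fn(*args, **kwargs)
    end.record()
    torch.cuda.synchronize()  # wait so that elapsed_time() is valid
    return out, start.elapsed_time(end)

# Per-epoch accounting (stage functions are hypothetical placeholders):
# _, t_comm    = timed(exchange_boundary_tensors)   # inter-partition communication
# _, t_compute = timed(local_forward_backward)      # intra-partition computation
# _, t_reduce  = timed(all_reduce_model_gradients)  # reduce for the model gradient
```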
D MAINTAINING CONVERGENCE SPEED (ADDITIONAL EXPERIMENTS)
We provide the additional convergence curves on Yelp in Fig. 9. We can see that PipeGCN and its variants maintain the convergence speed w.r.t the number of epochs while substantially reducing the end-to-end training time.
E SCALING GCN TRAINING OVER MULTIPLE GPU SERVERS
We also scale up PipeGCN training over multiple GPU servers (each contains AMD Radeon Instinct MI60 GPUs, an AMD EPYC 7642 CPU, and 48 lane PCI 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet.
The accuracy results of PipeGCN and its variants are summarized in Tab. 7:
Furthermore, we provide PipeGCN’s speedup against vanilla partition-parallel training in Tab. 8:
From the two tables above, we can observe that our PipeGCN family consistently maintains the accuracy of the full-graph training, while improving the throughput by 15%∼66% regardless of the machine settings and number of partitions.
F IMPLEMENTATION DETAILS
We discuss the details of the effective and efficient implementation of PipeGCN in this section.
First, for parallel communication and computation, a second cudaStream is required for communication besides the default cudaStream for computation. To also save memory buffers for communication, we batch all communication (e.g., from different layers) into this second cudaStream. When the popular communication backend, Gloo, is used, we parallelize the CPU-GPU transfer with CPU-CPU transfer.
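A minimal PyTorch sketch of this two-stream pattern is given below; `comm_op` and the two helper functions are illustrative, and the real implementation batches all layers' transfers into the single side stream as described above.

```python
import torch

comm_stream = torch.cuda.Stream()  # dedicated stream for boundary communication

def launch_boundary_comm(send_buf, comm_op):
    """Enqueue the (batched) boundary transfer on the side stream so it can
    overlap with computation running on the default stream."""
    # Do not read send_buf before the producing kernels on the default stream finish.
    comm_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(comm_stream):
        comm_op(send_buf)

def wait_boundary_comm():
    """Block the default stream until the deferred transfer has completed;
    called right before the stale boundary data is consumed in the next iteration."""
    torch.cuda.current_stream().wait_stream(comm_stream)
```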
Second, when a Dropout layer is used in the GCN model, it should be applied after communication. The implementation of the dropout layer for PipeGCN should be considered carefully so that the dropout mask remains consistent for the input tensor and its corresponding gradient. If the input feature passes through the dropout layer before being communicated, during the backward phase, the dropout mask is changed and the gradient of masked values is involved in the computation, which introduces noise to the calculation of follow-up gradients. As a result, the dropout layer can only be applied after receiving boundary features. | 1. What is the focus of the paper regarding distributed full-graph GCN training?
2. What are the strengths of the proposed approach, particularly in addressing efficiency bottlenecks?
3. What are the weaknesses of the paper, especially regarding the experimental setup and results?
4. Do you have any concerns about the scalability of the proposed method?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose PipeGCN - a method for efficiently distributed full-graph GCN training. The method pipelines communication with computation in distributed GCN training to hide substantial communication overhead. The paper leads an effort to study pipelined and asynchronous distributed training of GCNs with a new smoothing method aimed at reducing the error incurred by stale features/feature gradients at minimal overhead.
Review
Strengths
It analyzes two efficiency bottlenecks in distributed training, including communication overhead and synchronization, and proposes PipeGCN to pipeline the inter-partition communication with intra-partition computation.
The paper provides novel theoretical proof of the convergence of GCN training with stale feature and feature gradients, which is useful for future work.
The experiment results show a significant speedup of the training for the GCN model on multiple reported datasets and compare it with other recent methods.
Weakness
PipeGCN aims at scaling graph neural network training, but the setup of the largest dataset, ogbn-papers100M, is not practical. With only 2 layers and 8 hidden units, the GNN may not learn anything from the graph. Also, the accuracy on ogbn-papers100M is missing from the result table. With that in mind, I am concerned about the scalability of the system.
For each dataset, the paper only shows the results for a fixed number of partitions. It would be great to see how the speed and convergence differ when the number of partitions and computation nodes increase.
The results in Table 6 are vague, without showing the dataset used or what the distributed GCN method is. |
ICLR | Title
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication
Abstract
Graph Convolutional Networks (GCNs) are the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead of communicating node features and feature gradients among partitions for every GCN layer during each training iteration, limiting the achievable training efficiency and model scalability. To this end, we propose PipeGCN, a simple yet effective scheme that hides the communication overhead by pipelining inter-partition communication with intra-partition computation. It is non-trivial to pipeline for efficient GCN training, as communicated node features/gradients will become stale and thus can harm the convergence, negating the pipeline benefit. Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients. This work not only provides a theoretical convergence analysis but also finds the convergence rate of PipeGCN to be close to that of the vanilla distributed GCN training without any staleness. Furthermore, we develop a smoothing method to further improve PipeGCN’s convergence. Extensive experiments show that PipeGCN can largely boost the training throughput (1.7×∼28.5×) while achieving the same accuracy as its vanilla counterpart and existing full-graph training methods. The code is available at https://github.com/RICE-EIC/PipeGCN.
1 INTRODUCTION
Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016) have gained great popularity recently as they have demonstrated state-of-the-art (SOTA) performance in learning graph-structured data (Zhang & Chen, 2018; Xu et al., 2018; Ying et al., 2018). Their promising performance results from their ability to capture diverse neighborhood connectivity. In particular, a GCN aggregates all features from the neighbor node set for a given node, the feature of which is then updated via a multi-layer perceptron. Such a two-step process (neighbor aggregation and node update) empowers GCNs to better learn graph structures. Despite their promising performance, training GCNs at scale is still a challenging problem, as a prohibitive amount of compute and memory resources is required to train a real-world large-scale graph, let alone exploring deeper and more advanced models. To overcome this challenge, various sampling-based methods have been proposed to reduce the resource requirement at the cost of incurring feature approximation errors. A straightforward instance is to create mini-batches by sampling neighbors (e.g., GraphSAGE (Hamilton et al., 2017) and VR-GCN (Chen et al., 2018)) or to extract subgraphs as training samples (e.g., Cluster-GCN (Chiang et al., 2019) and GraphSAINT (Zeng et al., 2020)).
In addition to sampling-based methods, distributed GCN training has emerged as a promising alternative, as it enables large full-graph training of GCNs across multiple accelerators such as GPUs.
This approach first partitions a giant graph into multiple small subgraphs, each of which is able to fit into a single GPU, and then trains these partitioned subgraphs locally on GPUs together with indispensable communication across partitions. Following this direction, several recent works (Ma et al., 2019; Jia et al., 2020; Tripathy et al., 2020; Thorpe et al., 2021; Wan et al., 2022) have been proposed, verifying the great potential of distributed GCN training. P 3 (Gandhi & Iyer, 2021) follows another direction that splits the data along the feature dimension and leverages intra-layer model parallelism for training, which shows superior performance on small models.
In this work, we propose a new method for distributed GCN training, PipeGCN, which targets achieving a full-graph accuracy with boosted training efficiency. Our main contributions are following:
• We first analyze two efficiency bottlenecks in distributed GCN training: the required significant communication overhead and frequent synchronization, and then propose a simple yet effective technique called PipeGCN to address both aforementioned bottlenecks by pipelining inter-partition communication with intra-partition computation to hide the communication overhead.
• We address the challenge raised by PipeGCN, i.e., the resulting staleness in communicated features and feature gradients (neither weights nor weight gradients), by providing a theoretical convergence analysis and showing that PipeGCN’s convergence rate is O(T^{-2/3}), i.e., close to vanilla distributed GCN training without staleness. To the best of our knowledge, we are the first to provide a theoretical convergence proof of GCN training with both stale features and stale feature gradients.
• We further propose a low-overhead smoothing method to further improve PipeGCN’s convergence by reducing the error incurred by the staleness.
• Extensive empirical and ablation studies consistently validate the advantages of PipeGCN over both vanilla distributed GCN training and those SOTA full-graph training methods (e.g., boosting the training throughput by 1.7×∼28.5× while achieving the same or a better accuracy).
2 BACKGROUND AND RELATED WORKS
Graph Convolutional Networks. GCNs represent each node in a graph as a feature (embedding) vector and learn the feature vector via a two-step process (neighbor aggregation and then node update) for each layer, which can be mathematically described as:
$$z_v^{(\ell)} = \zeta^{(\ell)}\big(\{h_u^{(\ell-1)} \mid u \in \mathcal{N}(v)\}\big) \qquad (1)$$
$$h_v^{(\ell)} = \phi^{(\ell)}\big(z_v^{(\ell)}, h_v^{(\ell-1)}\big) \qquad (2)$$
where $\mathcal{N}(v)$ is the neighbor set of node $v$ in the graph, $h_v^{(\ell)}$ represents the learned embedding vector of node $v$ at the $\ell$-th layer, $z_v^{(\ell)}$ is an intermediate aggregated feature calculated by an aggregation function $\zeta^{(\ell)}$, and $\phi^{(\ell)}$ is the function for updating the feature of node $v$. The original GCN (Kipf & Welling, 2016) uses a weighted average aggregator for $\zeta^{(\ell)}$, and the update function $\phi^{(\ell)}$ is a single-layer perceptron $\sigma(W^{(\ell)} z_v^{(\ell)})$, where $\sigma(\cdot)$ is a non-linear activation function and $W^{(\ell)}$ is a weight matrix. Another famous GCN instance is GraphSAGE (Hamilton et al., 2017), in which $\phi^{(\ell)}$ is $\sigma\big(W^{(\ell)} \cdot \mathrm{CONCAT}(z_v^{(\ell)}, h_v^{(\ell-1)})\big)$.
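As a concrete illustration of Equ. 1–2, a dense-matrix PyTorch sketch with a weighted-average aggregator is given below; the function names, dense shapes, and the choice of ReLU are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gcn_layer(adj_norm: torch.Tensor, h: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One GCN layer: neighbor aggregation (Equ. 1) then node update (Equ. 2).

    adj_norm: normalized adjacency matrix (dense here for clarity), shape [N, N]
    h:        node embeddings from the previous layer, shape [N, d_in]
    weight:   trainable weight matrix, shape [d_in, d_out]
    """
    z = adj_norm @ h            # Equ. 1: weighted-average aggregation over neighbors
    return F.relu(z @ weight)   # Equ. 2: single-layer perceptron update

def sage_update(z: torch.Tensor, h_prev: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """GraphSAGE-style update: concatenate aggregated and self features before the MLP."""
    return F.relu(torch.cat([z, h_prev], dim=-1) @ weight)
```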
Distributed Training for GCNs. A real-world graph can contain millions of nodes and billions of edges (Hu et al., 2020), for which a feasible training approach is to partition it into small subgraphs (to fit into each GPU’s resource), and train them in parallel, during which necessary communication is performed to exchange boundary node features and gradients to satisfy GCNs’s neighbor aggregation (Equ. 1). Such an approach is called vanilla partition-parallel training and is illustrated in Fig. 1 (a). Following this approach, several works have been proposed recently. NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and ROC (Jia et al., 2020) perform such partition-parallel training but rely on CPUs for storage for all partitions and repeated swapping of a partial partition to GPUs. Inevitably, prohibitive CPU-GPU swaps are incurred, plaguing the achievable training efficiency. CAGNET (Tripathy et al., 2020) is different in that it splits each node feature vector into tiny sub-vectors which are then broadcasted and computed sequentially, thus requiring redundant communication and frequent synchronization. Furthermore, P 3 (Gandhi & Iyer, 2021) proposes to split both the feature and the GCN layer for mitigating the communication overhead, but it makes a strong assumption that the hidden dimensions of a GCN should be considerably smaller than that of
[Figure 1: An illustrative comparison between vanilla partition-parallel training and PipeGCN — (a) vanilla partition-parallel training, (b) timeline of vanilla partition-parallel training, and (c) PipeGCN and its timeline.]
input features, which restricts the model size. A concurrent work Dorylus (Thorpe et al., 2021) adopts a fine-grained pipeline along each compute operation in GCN training and supports asynchronous usage of stale features. Nevertheless, the resulting staleness of feature gradients is neither analyzed nor considered for convergence proof, let alone error reduction methods for the incurred staleness.
Asynchronous Distributed Training. Many prior works have been proposed for asynchronous distributed training of DNNs. Most works (e.g., Hogwild! (Niu et al., 2011), SSP (Ho et al., 2013), and MXNet (Li et al., 2014)) rely on a parameter server with multiple workers running asynchronously to hide communication overhead of weights/(weight gradients) among each other, at a cost of using stale weight gradients from previous iterations. Other works like Pipe-SGD (Li et al.,
2018b) pipeline such communication with local computation of each worker. Another direction is to partition a large model along its layers across multiple GPUs and then stream in small data batches through the layer pipeline, e.g., PipeDream (Harlap et al., 2018) and PipeMare (Yang et al., 2021). Nonetheless, all these works aim at large models with small data, where communication overhead of model weights/weight gradients are substantial but data feature communications are marginal (if not none), thus not well suited for GCNs. More importantly, they focus on convergence with stale weight gradients of models, rather than stale features/feature gradients incurred in GCN training. Tab. 1 summarizes the differences. In a nutshell, little effort has been made to study asynchronous or pipelined distributed training of GCNs, where feature communication plays the major role, let alone the corresponding theoretical convergence proofs.
GCNs with Stale Features/Feature Gradients. Several recent works have been proposed to adopt either stale features (Chen et al., 2018; Cong et al., 2020) or feature gradients (Cong et al., 2021) in single-GPU training of GCNs. Nevertheless, their convergence analysis considers only one of the two kinds of staleness and derives a convergence rate of O(T^{-1/2}) for pure sampling-based methods. This is, however, limited in distributed GCN training as its convergence is simultaneously affected by both kinds of staleness. PipeGCN proves such convergence with both stale features and feature gradients and offers a better rate of O(T^{-2/3}). Furthermore, none of the previous works has studied the errors incurred by staleness, which harm the convergence speed, while PipeGCN develops a low-overhead smoothing method to reduce such errors.
3 THE PROPOSED PIPEGCN FRAMEWORK
Overview. To enable efficient distributed GCN training, we first identify the two bottlenecks associated with vanilla partition-parallel training: substantial communication overhead and frequently synchronized communication (see Fig. 1(b)), and then address them directly by proposing a novel strategy, PipeGCN, which pipelines the communication and computation stages across two adjacent iterations in each partition of distributed GCN training for breaking the synchrony and then hiding the communication latency (see Fig. 1(c)). It is non-trivial to achieve efficient GCN training with such a pipeline method, as staleness is incurred in communicated features/feature gradients and
more importantly little effort has been made to study the convergence guarantee of GCN training using stale feature gradients. This work takes an initial effort to prove both the theoretical and empirical convergence of such a pipelined GCN training method, and for the first time shows its convergence rate to be close to that of vanilla GCN training without staleness. Furthermore, we propose a low-overhead smoothing method to reduce the errors due to stale features/feature gradients for further improving the convergence.
3.1 BOTTLENECKS IN VANILLA PARTITION-PARALLEL TRAINING
Significant communication overhead. Fig. 1(a) illustrates vanilla partition-parallel training, where each partition holds inner nodes that come from the original graph and boundary nodes that come from other subgraphs. These boundary nodes are demanded by the neighbor aggregation of GCNs across neighbor partitions, e.g., in Fig. 1(a) node-5 needs nodes-[3,4,6] from other partitions for calculating Equ. 1. Therefore, it is the features/gradients of boundary nodes that dominate the communication overhead in distributed GCN training. Note that the amount of boundary nodes can be excessive and far exceeds the inner nodes, as the
boundary nodes are replicated across partitions and scale with the number of partitions. Besides the sheer size, communication of boundary nodes occurs for (1) each layer and (2) both forward and backward passes, making communication overhead substantial. We evaluate such overhead1 in Tab. 2 and find communication to be dominant, which is consistent with CAGNET (Tripathy et al., 2020).
Frequently synchronized communication. The aforementioned communication of boundary nodes must be finished before calculating Equ. 1 and Equ. 2, which inevitably forces synchronization between communication and computation and requires a fully sequential execution (see Fig. 1(b)). Thus, for most of training time, each partition is waiting for dominant features/gradients communication to finish before the actual compute, repeated for each layer and for both forward and backward passes.
3.2 THE PROPOSED PIPEGCN METHOD
Fig. 1(c) illustrates the high-level overview of PipeGCN, which pipelines the communicate and compute stages spanning two iterations for each GCN layer. Fig. 2 further provides the detailed end-to-end flow, where PipeGCN removes the heavy communication overhead in the vanilla approach by breaking the synchronization between communicate and compute and hiding communicate with compute of each GCN layer. This is achieved by deferring the communicate to next iteration’s compute (instead of serving the current iteration) such that compute and communicate can run in
1The detailed setting can be found in Sec. 4.
Algorithm 1: Training a GCN with PipeGCN (per-partition view).
Input: partition id i, partition count n, graph partition Gi, propagation matrix Pi, node feature Xi, label Yi, boundary node set Bi, layer count L, learning rate η, initial model W0
Output: trained model WT after T iterations
1  Vi ← {node v ∈ Gi : v ∉ Bi}  ▷ create inner node set
2  Broadcast Bi and Receive [B1, ..., Bn]
3  [Si,1, ..., Si,n] ← [B1 ∩ Vi, ..., Bn ∩ Vi]
4  Broadcast Vi and Receive [V1, ..., Vn]
5  [S1,i, ..., Sn,i] ← [Bi ∩ V1, ..., Bi ∩ Vn]
6  H(0) ← [Xi; 0]  ▷ initialize node features, set boundary features to 0
7  for t := 1 → T do
8    for ℓ := 1 → L do  ▷ forward pass
9      if t > 1 then
10       wait until thread(ℓ)_f completes
11       [H(ℓ−1)_{S1,i}, ..., H(ℓ−1)_{Sn,i}] ← [B(ℓ)_1, ..., B(ℓ)_n]  ▷ update boundary features
12     end
13     with thread(ℓ)_f  ▷ communicate boundary features in parallel
14       Send [H(ℓ−1)_{Si,1}, ..., H(ℓ−1)_{Si,n}] to partitions [1, ..., n] and Receive [B(ℓ)_1, ..., B(ℓ)_n]
15     H(ℓ)_{Vi} ← σ(Pi H(ℓ−1) W(ℓ)_{t−1})  ▷ update inner node features
16   end
17   J(L)_{Vi} ← ∂Loss(H(L)_{Vi}, Yi) / ∂H(L)_{Vi}
18   for ℓ := L → 1 do  ▷ backward pass
19     G(ℓ)_i ← [Pi H(ℓ−1)]ᵀ (J(ℓ)_{Vi} ∘ σ′(Pi H(ℓ−1) W(ℓ)_{t−1}))  ▷ calculate weight gradient
20     if ℓ > 1 then
21       J(ℓ−1) ← Piᵀ (J(ℓ)_{Vi} ∘ σ′(Pi H(ℓ−1) W(ℓ)_{t−1})) [W(ℓ)_{t−1}]ᵀ  ▷ calculate feature gradient
22       if t > 1 then
23         wait until thread(ℓ)_b completes
24         for j := 1 → n do
25           J(ℓ−1)_{Si,j} ← J(ℓ−1)_{Si,j} + C(ℓ)_j  ▷ accumulate feature gradient
26         end
27       end
28       with thread(ℓ)_b  ▷ communicate boundary feature gradients in parallel
29         Send [J(ℓ−1)_{S1,i}, ..., J(ℓ−1)_{Sn,i}] to partitions [1, ..., n] and Receive [C(ℓ)_1, ..., C(ℓ)_n]
30     end
31   end
32   G ← AllReduce(Gi)  ▷ synchronize model gradient
33   Wt ← Wt−1 − η · G  ▷ update model
34 end
35 return WT
parallel. Inevitably, staleness is introduced in the deferred communication and results in a mixed usage of fresh inner features/gradients and stale boundary features/gradients.
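The minimal Python sketch below illustrates this deferred-communication idea for a single layer; the thread-pool setup and the names `exchange_fn` and `layer` are placeholders for exposition, not the per-layer communication threads of the actual PipeGCN implementation.

```python
from concurrent.futures import ThreadPoolExecutor
import torch

executor = ThreadPoolExecutor(max_workers=1)  # helper thread to overlap communication
pending = None                                # handle of the in-flight boundary exchange

def pipelined_forward(layer, h_inner, h_boundary_stale, exchange_fn):
    """One pipelined forward step (sketch): consume the previous iteration's
    boundary features, kick off this iteration's exchange, and compute meanwhile."""
    global pending
    if pending is not None:
        h_boundary_stale = pending.result()          # wait for the deferred transfer
    pending = executor.submit(exchange_fn, h_inner)  # overlap the next exchange with compute
    h_full = torch.cat([h_inner, h_boundary_stale], dim=0)
    return layer(h_full)
```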
Analytically, PipeGCN is achieved by modifying Equ. 1. For instance, when using a mean aggregator, Equ. 1 and its corresponding backward formulation in PipeGCN become:
$$z_v^{(t,\ell)} = \mathrm{MEAN}\big(\{h_u^{(t,\ell-1)} \mid u \in \mathcal{N}(v)\setminus\mathcal{B}(v)\} \cup \{h_u^{(t-1,\ell-1)} \mid u \in \mathcal{B}(v)\}\big) \qquad (3)$$
$$\delta_{h_u}^{(t,\ell)} = \sum_{v:\,u\in\mathcal{N}(v)\setminus\mathcal{B}(v)} \frac{1}{d_v}\,\delta_{z_v}^{(t,\ell+1)} + \sum_{v:\,u\in\mathcal{B}(v)} \frac{1}{d_v}\,\delta_{z_v}^{(t-1,\ell+1)} \qquad (4)$$
where $\mathcal{B}(v)$ is node $v$'s boundary node set, $d_v$ denotes node $v$'s degree, and $\delta_{h_u}^{(t,\ell)}$ and $\delta_{z_v}^{(t,\ell)}$ represent the gradient approximations of $h_u$ and $z_v$ at layer $\ell$ and iteration $t$, respectively. Lastly, the implementation of PipeGCN is outlined in Alg. 1.
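To make Equ. 3 concrete, a per-node PyTorch sketch of the mixed fresh/stale mean aggregation is shown below; tensor names and shapes are illustrative.

```python
import torch

def stale_mean_aggregate(h_fresh: torch.Tensor,
                         h_stale: torch.Tensor,
                         nbr_idx: torch.Tensor,
                         is_boundary: torch.Tensor) -> torch.Tensor:
    """Equ. 3 for a single node: average fresh embeddings of in-partition neighbors
    with stale (previous-iteration) embeddings of boundary neighbors.

    h_fresh / h_stale: current / previous-iteration embedding tables, shape [N, d]
    nbr_idx:           indices of the node's neighbors, shape [deg]
    is_boundary:       boolean mask over nbr_idx marking boundary neighbors
    """
    gathered = torch.where(is_boundary.unsqueeze(-1), h_stale[nbr_idx], h_fresh[nbr_idx])
    return gathered.mean(dim=0)
```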
3.3 PIPEGCN’S CONVERGENCE GUARANTEE
As PipeGCN adopts a mixed usage of fresh inner features/gradients and stale boundary features/gradients, its convergence rate is still unknown. We have proved the convergence of PipeGCN and present the convergence property in the following theorem. Theorem 3.1 (Convergence of PipeGCN, informal version). There exists a constant E such that for any arbitrarily small constant ε > 0, we can choose a learning rate $\eta = \frac{\sqrt{\varepsilon}}{E}$ and a number of training iterations $T = (\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*}))E\varepsilon^{-3/2}$ such that:
$$\frac{1}{T}\sum_{t=1}^{T}\|\nabla\mathcal{L}(\theta^{(t)})\|^2 \le O(\varepsilon)$$
where L(·) is the loss function, θ(t) and θ∗ represent the parameter vector at iteration t and the optimal parameter respectively.
Therefore, the convergence rate of PipeGCN is O(T^{-2/3}), which is better than that of sampling-based methods (O(T^{-1/2})) (Chen et al., 2018; Cong et al., 2021) and close to that of full-graph training (O(T^{-1})). The formal version of the theorem and our detailed proof can be found in Appendix A.
3.4 THE PROPOSED SMOOTHING METHOD
To further improve the convergence of PipeGCN, we propose a smoothing method to reduce errors incurred by stale features/feature gradients at a minimal overhead. Here we present the smoothing of feature gradients, and the same formulation also applies to features. To improve the approximate gradients for each feature, fluctuations in feature gradients between adjacent iterations should be reduced. Therefore, we apply a light-weight moving average to the feature gradients of each boundary node v as follows:
$$\hat{\delta}^{(t,\ell)}_{z_v} = \gamma\,\hat{\delta}^{(t-1,\ell)}_{z_v} + (1-\gamma)\,\delta^{(t,\ell)}_{z_v}$$
where $\hat{\delta}^{(t,\ell)}_{z_v}$ is the smoothed feature gradient at layer $\ell$ and iteration $t$, and $\gamma$ is the decay rate. When integrating this smoothed feature gradient method into the backward pass, Equ. 4 can be rewritten as:
$$\hat{\delta}^{(t,\ell)}_{h_u} = \sum_{v:\,u\in\mathcal{N}(v)\setminus\mathcal{B}(v)} \frac{1}{d_v}\,\delta^{(t,\ell+1)}_{z_v} \;+\; \sum_{v:\,u\in\mathcal{B}(v)} \frac{1}{d_v}\,\hat{\delta}^{(t-1,\ell+1)}_{z_v}$$
Note that the smoothing of stale features and gradients can be independently applied to PipeGCN.
4 EXPERIMENT RESULTS
We evaluate PipeGCN on four large-scale datasets, Reddit (Hamilton et al., 2017), ogbn-products (Hu et al., 2020), Yelp (Zeng et al., 2020), and ogbn-papers100M (Hu et al., 2020). More details are provided in Tab. 3. To ensure robustness and reproducibility, we fix (i.e., do not tune) the hyperparameters and settings for PipeGCN and its variants throughout all experiments. To implement partition parallelism (for both vanilla distributed GCN training and PipeGCN), the widely used METIS (Karypis & Kumar, 1998) partition algorithm is adopted for graph partition with its objective set to minimize the communication volume. We implement PipeGCN in PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019). Experiments are conducted on a machine with 10 RTX-2080Ti (11GB), Xeon 6230R@2.10GHz (187GB), and PCIe3x16 connecting CPU-GPU and GPU-GPU. Only for ogbn-papers100M, we use 4 compute nodes (each contains 8 MI60 GPUs, an AMD EPYC 7642 CPU, and 48 lane PCI 3.0 connecting CPU-GPU and GPU-GPU) networked with 10Gbps Ethernet. To support full-graph GCN training with the model sizes in Tab. 3, the minimum required partition numbers are 2, 3, 5, 32 for Reddit, ogbn-products, Yelp, and ogbn-papers100M, respectively.
For convenience, we here name all methods: vanilla partition-parallel training of GCNs (GCN), PipeGCN with feature gradient smoothing (PipeGCN-G), PipeGCN with feature smoothing (PipeGCN-F), and PipeGCN with both smoothing (PipeGCN-GF). The default decay rate γ for all smoothing methods is set to 0.95.
4.1 IMPROVING TRAINING THROUGHPUT OVER FULL-GRAPH TRAINING METHODS
Fig. 3 compares the training throughput between PipeGCN and the SOTA full-graph training methods (ROC (Jia et al., 2020) and CAGNET (Tripathy et al., 2020)). We observe that both vanilla partition-parallel training (GCN) and PipeGCN greatly outperform ROC and CAGNET across different numbers of partitions, because they avoid both the expensive CPU-GPU swaps (ROC) and the redundant node broadcast (CAGNET). Specifically, GCN is 3.1×∼16.4× faster than ROC and 2.1×∼10.2× faster than CAGNET (c=2). PipeGCN further improves upon GCN, achieving a throughput improvement of 5.6×∼28.5× over ROC and 3.9×∼17.7× over CAGNET (c=2)2. Note that we are not able to compare PipeGCN with NeuGraph (Ma et al., 2019), AliGraph (Zhu et al., 2019), and P 3 (Gandhi & Iyer, 2021) as their code is not publicly available. Besides, Dorylus (Thorpe et al., 2021) is not comparable, as it is not for regular GPU servers. Considering the substantial performance gap between ROC/CAGNET and GCN, we focus on comparing GCN with PipeGCN for the remainder of the section.
4.2 IMPROVING TRAINING THROUGHPUT WITHOUT COMPROMISING ACCURACY
We compare the training performance of both test score and training throughput between GCN and PipeGCN in Tab. 4. We can see that PipeGCN without smoothing already achieves a comparable test score with the vanilla GCN training on both Reddit and Yelp, and incurs only a negligible accuracy drop (-0.08%∼-0.23%) on ogbn-products, while boosting the training throughput by 1.72×∼2.16× across all datasets and different number of partitions3, thus validating the effectiveness of PipeGCN.
With the proposed smoothing method plugged in, PipeGCN-G/F/GF is able to compensate for the dropped score of vanilla PipeGCN, achieving a test score equal to or even better than that of vanilla GCN training (without staleness), e.g., 97.14% vs. 97.11% on Reddit, 79.36% vs. 79.14% on ogbn-products and 65.28% vs. 65.26% on Yelp. Meanwhile, PipeGCN-G/F/GF enjoys a similar throughput improvement as vanilla PipeGCN, thus validating the negligible overhead of the proposed smoothing method. Therefore, pipelined transfer of features and gradients greatly improves the training throughput while maintaining the full-graph accuracy.
Note that our distributed GCN training methods consistently achieve higher test scores than SOTA sampling-based methods for GraphSAGE-based models reported in (Zeng et al., 2020) and (Hu et al., 2020), confirming that full-graph training is preferred to obtain better GCN models. For example, the best sampling-based method achieves a 96.6% accuracy on Reddit (Zeng et al., 2020) while full-graph GCN training achieves 97.1%, and PipeGCN improves the accuracy by 0.28% over sampling-based GraphSAGE models on ogbn-products (Hu et al., 2020). This advantage of full-graph training is also validated by recent works (Jia et al., 2020; Tripathy et al., 2020; Liu et al., 2022; Wan et al., 2022).
2More detailed comparisons among full-graph training methods can be found in Appendix B. 3More details regarding PipeGCN’s advantages in training throughput can be found in Appendix C.
4.3 MAINTAINING CONVERGENCE SPEED
To understand PipeGCN's influence on the convergence speed, we compare the training curves of different methods in Fig. 4. We observe that the convergence of PipeGCN without smoothing is still comparable with that of the vanilla GCN training, although PipeGCN converges slower at the early phase of training and then catches up at the later phase, due to the staleness of boundary features/gradients. With the proposed smoothing methods, PipeGCN-G/F boosts the convergence substantially and matches the convergence speed of vanilla GCN training. There is no clear difference between PipeGCN-G and PipeGCN-F. Lastly, with combined smoothing of features and gradients, PipeGCN-GF can achieve the same or even slightly better convergence speed as vanilla GCN training (e.g., on Reddit) but can overfit gradually, similarly to the vanilla GCN training, which is further investigated in Sec. 4.4. Therefore, PipeGCN maintains the convergence speed w.r.t. the number of epochs while reducing the end-to-end training time by around 50% thanks to its boosted training throughput (see Tab. 4).
4.4 BENEFIT OF STALENESS SMOOTHING METHOD
Error Reduction and Convergence Speedup. To understand why the proposed smoothing technique (Sec. 3.4) speeds up convergence, we compare the error incurred by the stale communication between PipeGCN and PipeGCN-G/F. The error is calculated as the Frobenius-norm of the gap between the correct gradient/feature and the stale gradient/feature used in PipeGCN training. Fig. 5 compares the error at each GCN layer. We can see that the proposed smoothing technique (PipeGCNG/F) reduces the error of staleness substantially (from the base version of PipeGCN) and this benefit consistently holds across different layers in terms of both feature and gradient errors, validating the effectiveness of our smoothing method and explaining its improvement to the convergence speed.
Overfitting Mitigation. To understand the effect of staleness smoothing on model overfitting, we also evaluate the test-accuracy convergence under different decay rates γ in Fig. 6. Here ogbn-products is adopted as the study case because the distribution of its test set largely differs from that of its training set. From Fig. 6, we observe that smoothing with a large γ (0.7/0.95) offers a fast convergence, i.e., close to the vanilla GCN training, but overfits rapidly. To understand this issue, we
further provide detailed comparisons of the errors incurred under different γ in Fig. 7. We can see that a larger γ enjoys lower approximation errors and makes the gradients/features more stable, thus improving the convergence speed. The increased stability on the training set, however, constrains the model from exploring a more general minimum point on the test set, thus leading to overfitting as the vanilla GCN training. In contrast, a small γ (0 ∼ 0.5) mitigates this overfitting and achieves a better accuracy (see Fig. 6). But a too-small γ (e.g., 0) gives a high error for both stale features and gradients (see Fig. 7), thus suffering from a slower convergence. Therefore, a trade-off between convergence speed and achievable optimality exists between different smoothing decay rates, and γ = 0.5 combines the best of both worlds in this study.
4.5 SCALING LARGE GRAPH TRAINING OVER MULTIPLE SERVERS
To further test the capability of PipeGCN, we scale up the graph size to ogbn-papers100M and train GCN over multiple GPU servers with 32 GPUs. Tab. 5 shows that even at such a large-scale setting where communication overhead dominates, PipeGCN still reduces communication time by 61%, leading to a total training time reduction of 38% compared to the vanilla GCN baseline4.
5 CONCLUSION
In this work, we propose a new method, PipeGCN, for efficient full-graph GCN training. PipeGCN pipelines communication with computation in distributed GCN training to hide the prohibitive communication overhead. More importantly, we are the first to provide convergence analysis for GCN training with both stale features and feature gradients, and further propose a light-weight smoothing method for convergence speedup. Extensive experiments validate the advantages of PipeGCN over both vanilla GCN training (without staleness) and state-of-the-art full-graph training.
4More experiments on multi-server training can be found in Appendix E.
6 ACKNOWLEDGEMENT
The work is supported by the National Science Foundation (NSF) through the MLWiNS program (Award number: 2003137), the CC∗ Compute program (Award number: 2019007), and the NeTS program (Award number: 1801865).
A CONVERGENCE PROOF
In this section, we prove the convergence of PipeGCN. Specifically, we first show that when the model is updated via gradient descent, the changes of intermediate features and their gradients are bounded by a constant that is proportional to the learning rate η under standard assumptions. Based on this, we further demonstrate that the error incurred by the staleness is proportional to η, which guarantees that the gradient error is bounded by ηE where E is defined in Corollary A.10, and thus PipeGCN converges in O(ε^{-3/2}) iterations.
A.1 NOTATIONS AND ASSUMPTIONS
For a given graph G = (V, E) with an adjacency matrix A and feature matrix X, we define the propagation matrix P as $P := \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$, where $\tilde{A} = A + I$ and $\tilde{D}_{u,u} = \sum_v \tilde{A}_{u,v}$. One GCN layer performs one step of feature propagation (Kipf & Welling, 2016) as formulated below
$$H^{(0)} = X,\qquad Z^{(\ell)} = P H^{(\ell-1)} W^{(\ell)},\qquad H^{(\ell)} = \sigma(Z^{(\ell)})$$
where H(`), W (`), and Z(`) denote the embedding matrix, the trainable weight matrix, and the intermediate embedding matrix in the `-th layer, respectively, and σ denotes a non-linear activation function. For an L-layer GCN, the loss function is denoted by L(θ) where θ = vec[W (1),W (2), · · · ,W (L)]. We define the `-th layer as a function f (`)(·, ·).
$$f^{(\ell)}(H^{(\ell-1)}, W^{(\ell)}) := \sigma(P H^{(\ell-1)} W^{(\ell)})$$
Its gradient w.r.t. the input embedding matrix can be represented as
$$J^{(\ell-1)} = \nabla_H f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := P^\top M^{(\ell)} [W^{(\ell)}]^\top$$
and its gradient w.r.t. the weight can be represented as
$$G^{(\ell)} = \nabla_W f^{(\ell)}(J^{(\ell)}, H^{(\ell-1)}, W^{(\ell)}) := [P H^{(\ell-1)}]^\top M^{(\ell)}$$
where $M^{(\ell)} = J^{(\ell)} \circ \sigma'(P H^{(\ell-1)} W^{(\ell)})$ and $\circ$ denotes the Hadamard product. For partition-parallel training, we can split $P$ into two parts $P = P_{in} + P_{bd}$, where $P_{in}$ represents intra-partition propagation and $P_{bd}$ denotes inter-partition propagation. For PipeGCN, we can represent one GCN layer as below
$$\tilde{H}^{(t,0)} = X,\qquad \tilde{Z}^{(t,\ell)} = P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)},\qquad \tilde{H}^{(t,\ell)} = \sigma(\tilde{Z}^{(t,\ell)})$$
where t is the epoch number and W̃ (t,`) is the weight at epoch t layer `. We define the loss function for this setting as L̃(θ̃(t)) where θ̃(t) = vec[W̃ (t,1), W̃ (t,2), · · · , W̃ (t,L)]. We can also summarize the layer as a function f̃ (t,`)(·, ·)
f̃ (t,`)(H̃(t,`−1), W̃ (t,`)) := σ(PinH̃ (t,`−1)W̃ (t,`) + PbdH̃ (t−1,`−1)W̃ (t,`))
Note that H̃(t−1,`−1) is not a part of the input of f̃ (t,`)(·, ·) because it is a constant for the t-th epoch. The corresponding backward propagation follows the following computation
J̃ (t,`−1) = ∇H f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`))
G̃(t,`) = ∇W f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) where
M̃ (t,`) = J̃ (t,`) ◦ σ′(PinH̃(t,`−1)W̃ (t,`) + PbdH̃(t−1,`−1)W̃ (t,`))
∇H f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) := P>inM̃ (t,`)[W̃ (t,`)]> + P>bdM̃ (t−1,`)[W̃ (t−1,`)]>
∇W f̃ (t,`)(J̃ (t,`), H̃(t,`−1), W̃ (t,`)) := [PinH̃(t,`−1) + PbdH̃(t−1,`−1)]>M̃ (t,`)
Again, J̃ (t−1,`) is not a part of the input of∇H f̃ (t,`)(·, ·, ·) or∇W f̃ (t,`)(·, ·, ·) because it is a constant for epoch t. Finally, we define∇L̃(θ̃(t)) = vec[G̃(t,1), G̃(t,2), · · · , G̃(t,L)]. It should be highlighted that the ‘gradient’ ∇H f̃ (t,`)(·, ·, ·), ∇W f̃ (t,`)(·, ·, ·) and ∇L̃(θ̃(t)) are not the standard gradient for the corresponding forward process due to the stale communication. Properties of gradient cannot be directly applied to these variables.
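To make the split $P = P_{in} + P_{bd}$ concrete, a minimal illustrative sketch of one PipeGCN layer is given below (this is not the actual implementation; the partition bookkeeping and variable names are simplified assumptions). It combines fresh intra-partition features with boundary features cached from the previous epoch.

import numpy as np

def pipegcn_layer(P_in, P_bd, H_local, H_boundary_stale, W, act=np.tanh):
    # P_in:  (n_local, n_local) intra-partition block of the propagation matrix
    # P_bd:  (n_local, n_boundary) inter-partition block
    # H_local:          current-epoch embeddings of the local nodes
    # H_boundary_stale: boundary embeddings received during the previous epoch
    # Z^(t,l) = P_in H^(t,l-1) W + P_bd H^(t-1,l-1) W
    Z = P_in @ H_local @ W + P_bd @ H_boundary_stale @ W
    return act(Z)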
Before proceeding with our proof, we make the following standard assumptions about the adopted GCN architecture and input graph.

Assumption A.1. The loss function $Loss(\cdot, \cdot)$ is $C_{loss}$-Lipschitz continuous and $L_{loss}$-smooth w.r.t. the input node embedding vector, i.e., $|Loss(h^{(L)}, y) - Loss(h'^{(L)}, y)| \le C_{loss} \|h^{(L)} - h'^{(L)}\|_2$ and $\|\nabla Loss(h^{(L)}, y) - \nabla Loss(h'^{(L)}, y)\|_2 \le L_{loss} \|h^{(L)} - h'^{(L)}\|_2$, where $h^{(L)}$ is the predicted label and $y$ is the correct label vector.

Assumption A.2. The activation function $\sigma(\cdot)$ is $C_\sigma$-Lipschitz continuous and $L_\sigma$-smooth, i.e., $\|\sigma(z^{(\ell)}) - \sigma(z'^{(\ell)})\|_2 \le C_\sigma \|z^{(\ell)} - z'^{(\ell)}\|_2$ and $\|\sigma'(z^{(\ell)}) - \sigma'(z'^{(\ell)})\|_2 \le L_\sigma \|z^{(\ell)} - z'^{(\ell)}\|_2$.

Assumption A.3. For any $\ell \in [L]$, the norms of the weight matrices, the propagation matrix, and the input feature matrix are bounded: $\|W^{(\ell)}\|_F \le B_W$, $\|P\|_F \le B_P$, $\|X\|_F \le B_X$. (This generic assumption is also used in (Chen et al., 2018; Liao et al., 2020; Garg et al., 2020; Cong et al., 2021).)
A.2 BOUNDED MATRICES AND CHANGES
Lemma A.1. For any $\ell \in [L]$, the Frobenius norms of the node embedding matrices, of the gradients passed from the $\ell$-th layer node embeddings to the $(\ell-1)$-th, and of the gradient matrices are bounded, i.e.,

$\|H^{(\ell)}\|_F, \|\tilde{H}^{(t,\ell)}\|_F \le B_H$,

$\|J^{(\ell)}\|_F, \|\tilde{J}^{(t,\ell)}\|_F \le B_J$,

$\|M^{(\ell)}\|_F, \|\tilde{M}^{(t,\ell)}\|_F \le B_M$,

$\|G^{(\ell)}\|_F, \|\tilde{G}^{(t,\ell)}\|_F \le B_G$,

where

$B_H = \max_{1 \le \ell \le L} (C_\sigma B_P B_W)^{\ell} B_X$

$B_J = \max_{2 \le \ell \le L} (C_\sigma B_P B_W)^{L-\ell} C_{loss}$

$B_M = C_\sigma B_J$

$B_G = B_P B_H B_M$

Proof. The proof of $\|H^{(\ell)}\|_F \le B_H$ and $\|J^{(\ell)}\|_F \le B_J$ can be found in Proposition 1 in (Cong et al., 2021). By induction,

$\|\tilde{H}^{(t,\ell)}\|_F = \|\sigma(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)})\|_F \le C_\sigma B_W \|P_{in} + P_{bd}\|_F (C_\sigma B_P B_W)^{\ell-1} B_X \le (C_\sigma B_P B_W)^{\ell} B_X$

$\|\tilde{J}^{(t,\ell-1)}\|_F = \|P_{in}^{\top}(\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}))[\tilde{W}^{(t,\ell)}]^{\top} + P_{bd}^{\top}(\tilde{J}^{(t-1,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)}))[\tilde{W}^{(t-1,\ell)}]^{\top}\|_F \le C_\sigma B_W \|P_{in} + P_{bd}\|_F (C_\sigma B_P B_W)^{L-\ell} C_{loss} \le (C_\sigma B_P B_W)^{L-\ell+1} C_{loss}$

$\|M^{(\ell)}\|_F = \|J^{(\ell)} \circ \sigma'(Z^{(\ell)})\|_F \le C_\sigma B_J$, and $\|\tilde{M}^{(t,\ell)}\|_F = \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)})\|_F \le C_\sigma B_J$

$\|G^{(\ell)}\|_F = \|[P H^{(\ell-1)}]^{\top} M^{(\ell)}\|_F \le B_P B_H B_M$, and $\|\tilde{G}^{(t,\ell)}\|_F = \|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^{\top}\tilde{M}^{(t,\ell)}\|_F \le B_P B_H B_M$
Because the gradient matrices are bounded, the weight change is bounded.
Corollary A.2. For any $t, \ell$, $\|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F \le B_{\Delta W} = \eta B_G$, where $\eta$ is the learning rate.
Now we can analyze the changes of intermediate variables.
Lemma A.3. For any $t, \ell$, we have $\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le B_{\Delta Z}$ and $\|\tilde{H}^{(t,\ell)} - \tilde{H}^{(t-1,\ell)}\|_F \le B_{\Delta H}$, where $B_{\Delta Z} = \sum_{i=0}^{L-1} C_\sigma^{i} B_P^{i+1} B_W^{i} B_H B_{\Delta W}$ and $B_{\Delta H} = C_\sigma B_{\Delta Z}$.

Proof. When $\ell = 0$, $\|\tilde{H}^{(t,0)} - \tilde{H}^{(t-1,0)}\|_F = \|X - X\|_F = 0$. Now we consider $\ell > 0$ by induction.

$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F = \|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - (P_{in}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)} + P_{bd}\tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\|_F = \|P_{in}(\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-2,\ell-1)}\tilde{W}^{(t-1,\ell)})\|_F$

Then we analyze the bound of $\|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F$, which is denoted by $s^{(t,\ell)}$:

$s^{(t,\ell)} \le \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} - \tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F + \|\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t-1,\ell)} - \tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t-1,\ell)}\|_F \le B_H \|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F + B_W \|\tilde{H}^{(t,\ell-1)} - \tilde{H}^{(t-1,\ell-1)}\|_F$

According to Corollary A.2, $\|\tilde{W}^{(t,\ell)} - \tilde{W}^{(t-1,\ell)}\|_F \le B_{\Delta W}$. By induction, $\|\tilde{H}^{(t,\ell-1)} - \tilde{H}^{(t-1,\ell-1)}\|_F \le \sum_{i=0}^{\ell-2} C_\sigma^{i+1} B_P^{i+1} B_W^{i} B_H B_{\Delta W}$. Combining these inequalities,

$s^{(t,\ell)} \le B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^{i} B_P^{i} B_W^{i} B_H B_{\Delta W}$

Plugging it back, we have

$\|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le B_P \left( B_H B_{\Delta W} + \sum_{i=1}^{\ell-1} C_\sigma^{i} B_P^{i} B_W^{i} B_H B_{\Delta W} \right) = \sum_{i=0}^{\ell-1} C_\sigma^{i} B_P^{i+1} B_W^{i} B_H B_{\Delta W}$

$\|\tilde{H}^{(t,\ell)} - \tilde{H}^{(t-1,\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(\tilde{Z}^{(t-1,\ell)})\|_F \le C_\sigma \|\tilde{Z}^{(t,\ell)} - \tilde{Z}^{(t-1,\ell)}\|_F \le C_\sigma B_{\Delta Z}$
Lemma A.4. $\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \le B_{\Delta J}$, where

$B_{\Delta J} = \max_{2 \le \ell \le L} (B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W) \sum_{i=0}^{L-3} B_P^{i+1} B_W^{i} C_\sigma^{i}$

Proof. For the last layer ($\ell = L$), $\|\tilde{J}^{(t,L)} - \tilde{J}^{(t-1,L)}\|_F \le L_{loss} \|\tilde{H}^{(t,L)} - \tilde{H}^{(t-1,L)}\|_F \le L_{loss} B_{\Delta H}$. For the case of $\ell < L$, we prove the lemma by induction.

$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F = \|(P_{in}^{\top}\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} + P_{bd}^{\top}\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top}) - (P_{in}^{\top}\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} + P_{bd}^{\top}\tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^{\top})\|_F \le \|P_{in}^{\top}(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top})\|_F + \|P_{bd}^{\top}(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^{\top})\|_F$

We denote $\|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top}\|_F$ by $s^{(t,\ell)}$ and analyze its bound:

$s^{(t,\ell)} \le \|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top}\|_F + \|\tilde{M}^{(t,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top}\|_F \le B_M \|[\tilde{W}^{(t,\ell)}]^{\top} - [\tilde{W}^{(t-1,\ell)}]^{\top}\|_F + B_W \|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F$

According to Corollary A.2, $\|[\tilde{W}^{(t,\ell)}]^{\top} - [\tilde{W}^{(t-1,\ell)}]^{\top}\|_F \le B_{\Delta W}$. For the second term,

$\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F = \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t-1,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F \le \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F + \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)}) - \tilde{J}^{(t-1,\ell)} \circ \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F \le B_J \|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F + C_\sigma \|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \quad (5)$

According to the smoothness of $\sigma$ and Lemma A.3, $\|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(\tilde{Z}^{(t-1,\ell)})\|_F \le L_\sigma B_{\Delta Z}$. By induction,

$\|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W) \sum_{i=0}^{L-\ell-1} B_P^{i+1} B_W^{i} C_\sigma^{i}$

As a result,

$s^{(t,\ell)} \le B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z} + B_W C_\sigma \|\tilde{J}^{(t,\ell)} - \tilde{J}^{(t-1,\ell)}\|_F = (B_M B_{\Delta W} + B_W B_J L_\sigma B_{\Delta Z}) + B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W) \sum_{i=1}^{L-\ell} B_P^{i} B_W^{i} C_\sigma^{i} \le B_P^{L-\ell} B_W^{L-\ell+1} C_\sigma^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W) \sum_{i=0}^{L-\ell} B_P^{i} B_W^{i} C_\sigma^{i}$

Therefore,

$\|\tilde{J}^{(t,\ell-1)} - \tilde{J}^{(t-1,\ell-1)}\|_F \le \|P_{in}^{\top}(\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top})\|_F + \|P_{bd}^{\top}(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t-2,\ell)}[\tilde{W}^{(t-2,\ell)}]^{\top})\|_F \le B_P s^{(t,\ell)} \le (B_P B_W C_\sigma)^{L-\ell+1} B_{\Delta H} L_{loss} + (B_M B_{\Delta W} + L_\sigma B_J B_{\Delta Z} B_W) \sum_{i=0}^{L-\ell} B_P^{i+1} B_W^{i} C_\sigma^{i}$

From Equation 5, we can also conclude that

Corollary A.5. $\|\tilde{M}^{(t,\ell)} - \tilde{M}^{(t-1,\ell)}\|_F \le B_{\Delta M}$ with $B_{\Delta M} = B_J L_\sigma B_{\Delta Z} + C_\sigma B_{\Delta J}$.
A.3 BOUNDED FEATURE ERROR AND GRADIENT ERROR
In this subsection, we compare the difference between generic GCN and PipeGCN with the same parameter set, i.e., θ = θ̃(t).
Lemma A.6. $\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le E_Z$ and $\|\tilde{H}^{(t,\ell)} - H^{(\ell)}\|_F \le E_H$, where $E_Z = B_{\Delta H} \sum_{i=1}^{L} C_\sigma^{i-1} B_W^{i} B_P^{i}$ and $E_H = B_{\Delta H} \sum_{i=1}^{L} (C_\sigma B_W B_P)^{i}$.

Proof.

$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F = \|(P_{in}\tilde{H}^{(t,\ell-1)}\tilde{W}^{(t,\ell)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}\tilde{W}^{(t,\ell)}) - P H^{(\ell-1)} W^{(\ell)}\|_F \le \|(P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)} - P H^{(\ell-1)}) W^{(\ell)}\|_F \le B_W \|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\|_F \le B_W B_P \left( \|\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}\|_F + B_{\Delta H} \right)$

By induction, we assume that $\|\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}\|_F \le B_{\Delta H} \sum_{i=1}^{\ell-1} (C_\sigma B_W B_P)^{i}$. Therefore,

$\|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_W B_P B_{\Delta H} \sum_{i=0}^{\ell-1} (C_\sigma B_W B_P)^{i} = B_{\Delta H} \sum_{i=1}^{\ell} C_\sigma^{i-1} B_W^{i} B_P^{i}$

$\|\tilde{H}^{(t,\ell)} - H^{(\ell)}\|_F = \|\sigma(\tilde{Z}^{(t,\ell)}) - \sigma(Z^{(\ell)})\|_F \le C_\sigma \|\tilde{Z}^{(t,\ell)} - Z^{(\ell)}\|_F \le B_{\Delta H} \sum_{i=1}^{\ell} (C_\sigma B_W B_P)^{i}$
Lemma A.7. $\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F \le E_J$ and $\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le E_M$ with

$E_J = \max_{2 \le \ell \le L} (B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + B_P (B_W (B_J E_Z L_\sigma + B_{\Delta M}) + B_{\Delta W} B_M) \sum_{i=0}^{L-3} (B_P B_W C_\sigma)^{i}$

$E_M = C_\sigma E_J + L_\sigma B_J E_Z$

Proof. When $\ell = L$, $\|\tilde{J}^{(t,L)} - J^{(L)}\|_F \le L_{loss} E_H$. For any $\ell$, we assume that

$\|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} L_{loss} E_H + U \sum_{i=0}^{L-\ell-1} (B_P B_W C_\sigma)^{i} \quad (6)$

$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} C_\sigma L_{loss} E_H + U C_\sigma \sum_{i=0}^{L-\ell-1} (B_P B_W C_\sigma)^{i} + L_\sigma B_J E_Z \quad (7)$

where $U = B_P (B_W B_J E_Z L_\sigma + B_{\Delta W} B_M + B_W B_{\Delta M})$. We prove them by induction as follows.

$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F = \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}) - J^{(\ell)} \circ \sigma'(Z^{(\ell)})\|_F \le \|\tilde{J}^{(t,\ell)} \circ \sigma'(\tilde{Z}^{(t,\ell)}) - \tilde{J}^{(t,\ell)} \circ \sigma'(Z^{(\ell)})\|_F + \|\tilde{J}^{(t,\ell)} \circ \sigma'(Z^{(\ell)}) - J^{(\ell)} \circ \sigma'(Z^{(\ell)})\|_F \le B_J \|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(Z^{(\ell)})\|_F + C_\sigma \|\tilde{J}^{(t,\ell)} - J^{(\ell)}\|_F$

Here $\|\sigma'(\tilde{Z}^{(t,\ell)}) - \sigma'(Z^{(\ell)})\|_F \le L_\sigma E_Z$. With Equation 6,

$\|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F \le (B_P B_W C_\sigma)^{L-\ell} C_\sigma L_{loss} E_H + U C_\sigma \sum_{i=0}^{L-\ell-1} (B_P B_W C_\sigma)^{i} + L_\sigma B_J E_Z$

On the other hand,

$\|\tilde{J}^{(t,\ell-1)} - J^{(\ell-1)}\|_F = \|P_{in}^{\top}\tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} + P_{bd}^{\top}\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - P^{\top}M^{(\ell)}[W^{(\ell)}]^{\top}\|_F = \|P^{\top}(\tilde{M}^{(t,\ell)} - M^{(\ell)})[W^{(\ell)}]^{\top} + P_{bd}^{\top}(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top})\|_F \le \|P^{\top}(\tilde{M}^{(t,\ell)} - M^{(\ell)})[W^{(\ell)}]^{\top}\|_F + \|P_{bd}^{\top}(\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top})\|_F \le B_P B_W \|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F + B_P \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top}\|_F$

The first part is bounded by Equation 7. For the second part,

$\|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top}\|_F \le \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t,\ell)}]^{\top}\|_F + \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top}\|_F \le B_{\Delta W} B_M + B_W B_{\Delta M}$

Therefore,

$\|\tilde{J}^{(t,\ell-1)} - J^{(\ell-1)}\|_F \le B_P B_W \|\tilde{M}^{(t,\ell)} - M^{(\ell)}\|_F + B_P \|\tilde{M}^{(t-1,\ell)}[\tilde{W}^{(t-1,\ell)}]^{\top} - \tilde{M}^{(t,\ell)}[\tilde{W}^{(t,\ell)}]^{\top}\|_F \le (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U \sum_{i=1}^{L-\ell} (B_P B_W C_\sigma)^{i} + U = (B_P B_W C_\sigma)^{L-\ell+1} L_{loss} E_H + U \sum_{i=0}^{L-\ell} (B_P B_W C_\sigma)^{i}$
Lemma A.8. $\|\tilde{G}^{(t,\ell)} - G^{(\ell)}\|_F \le E_G$, where $E_G = B_P (B_H E_M + B_M E_H)$.

Proof.

$\|\tilde{G}^{(t,\ell)} - G^{(\ell)}\|_F = \|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^{\top}\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^{\top} M^{(\ell)}\|_F \le \|[P_{in}\tilde{H}^{(t,\ell-1)} + P_{bd}\tilde{H}^{(t-1,\ell-1)}]^{\top}\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^{\top}\tilde{M}^{(t,\ell)}\|_F + \|[P H^{(\ell-1)}]^{\top}\tilde{M}^{(t,\ell)} - [P H^{(\ell-1)}]^{\top} M^{(\ell)}\|_F \le B_M \|P(\tilde{H}^{(t,\ell-1)} - H^{(\ell-1)}) + P_{bd}(\tilde{H}^{(t-1,\ell-1)} - \tilde{H}^{(t,\ell-1)})\|_F + B_P B_H E_M \le B_M B_P (E_H + B_{\Delta H}) + B_P B_H E_M$

By summing up from $\ell = 1$ to $\ell = L$ on both sides, we have

Corollary A.9. $\|\nabla\tilde{\mathcal{L}}(\theta) - \nabla\mathcal{L}(\theta)\|_2 \le E_{loss}$, where $E_{loss} = L E_G$.
According to the derivation of $E_{loss}$, we observe that $E_{loss}$ contains a factor $\eta$. To simplify the expression of $E_{loss}$, we assume that $B_P B_W C_\sigma \le \frac{1}{2}$ without loss of generality, and rewrite Corollary A.9 as the following.

Corollary A.10. $\|\nabla\tilde{\mathcal{L}}(\theta) - \nabla\mathcal{L}(\theta)\|_2 \le \eta E$, where

$E = \frac{1}{8} L B_P^{3} B_X^{2} C_{loss} C_\sigma \left( 3 B_X C_\sigma^{2} L_{loss} + 6 B_X C_{loss} L_\sigma + 10 C_{loss} C_\sigma^{2} \right)$

A.4 PROOF OF THE MAIN THEOREM
We first introduce a lemma before the proof of our main theorem.

Lemma A.11 (Lemma 1 in (Cong et al., 2021)). An $L$-layer GCN is $L_f$-Lipschitz smooth, i.e., $\|\nabla\mathcal{L}(\theta_1) - \nabla\mathcal{L}(\theta_2)\|_2 \le L_f \|\theta_1 - \theta_2\|_2$.

Now we prove the main theorem.

Theorem A.12 (Convergence of PipeGCN, formal). Under Assumptions A.1, A.2, and A.3, by choosing a learning rate $\eta = \frac{\sqrt{\varepsilon}}{E}$ and a number of training iterations $T = (\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})) E \varepsilon^{-\frac{3}{2}}$, we have

$\frac{1}{T} \sum_{t=1}^{T} \|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le 3\varepsilon$

where $E$ is defined in Corollary A.10, $\varepsilon > 0$ is an arbitrarily small constant, $\mathcal{L}(\cdot)$ is the loss function, and $\theta^{(t)}$ and $\theta^{*}$ represent the parameter vector at iteration $t$ and the optimal parameter, respectively.
Proof. With the smoothness of the model,

$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) + \langle \nabla\mathcal{L}(\theta^{(t)}), \theta^{(t+1)} - \theta^{(t)} \rangle + \frac{L_f}{2} \|\theta^{(t+1)} - \theta^{(t)}\|_2^2 = \mathcal{L}(\theta^{(t)}) - \eta \langle \nabla\mathcal{L}(\theta^{(t)}), \nabla\tilde{\mathcal{L}}(\theta^{(t)}) \rangle + \frac{\eta^2 L_f}{2} \|\nabla\tilde{\mathcal{L}}(\theta^{(t)})\|_2^2$

Let $\delta^{(t)} = \nabla\tilde{\mathcal{L}}(\theta^{(t)}) - \nabla\mathcal{L}(\theta^{(t)})$ and $\eta \le 1/L_f$; we have

$\mathcal{L}(\theta^{(t+1)}) \le \mathcal{L}(\theta^{(t)}) - \eta \langle \nabla\mathcal{L}(\theta^{(t)}), \nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)} \rangle + \frac{\eta}{2} \|\nabla\mathcal{L}(\theta^{(t)}) + \delta^{(t)}\|_2^2 \le \mathcal{L}(\theta^{(t)}) - \frac{\eta}{2} \|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 + \frac{\eta}{2} \|\delta^{(t)}\|_2^2$

From Corollary A.10 we know that $\|\delta^{(t)}\|_2 < \eta E$. After rearranging the terms,

$\|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta} \left( \mathcal{L}(\theta^{(t)}) - \mathcal{L}(\theta^{(t+1)}) \right) + \eta^2 E^2$

Summing up from $t = 1$ to $T$ and taking the average,

$\frac{1}{T} \sum_{t=1}^{T} \|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le \frac{2}{\eta T} \left( \mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{(T+1)}) \right) + \eta^2 E^2 \le \frac{2}{\eta T} \left( \mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*}) \right) + \eta^2 E^2$

where $\theta^{*}$ is the minimum point of $\mathcal{L}(\cdot)$. By taking $\eta = \frac{\sqrt{\varepsilon}}{E}$ and $T = (\mathcal{L}(\theta^{(1)}) - \mathcal{L}(\theta^{*})) E \varepsilon^{-\frac{3}{2}}$ with an arbitrarily small constant $\varepsilon > 0$, we have

$\frac{1}{T} \sum_{t=1}^{T} \|\nabla\mathcal{L}(\theta^{(t)})\|_2^2 \le 3\varepsilon$
B TRAINING TIME BREAKDOWN OF FULL-GRAPH TRAINING METHODS
To understand why PipeGCN significantly boosts the training throughput over full-graph training methods, we provide the detailed time breakdown in Tab. 6 using the same model as Tab. 3 (4-layer GraphSAGE, 256 hidden units), in which “GCN” denotes the vanilla partition-parallel training illustrated in Fig. 1(a). We observe that PipeGCN greatly saves communication time.
C TRAINING TIME IMPROVEMENT BREAKDOWN OF PIPEGCN
To understand the training time improvement offered by PipeGCN, we further break down the epoch time into three parts (intra-partition computation, inter-partition communication, and the reduce for aggregating model gradients) and provide the result in Fig. 8. We can observe that: 1) inter-partition communication dominates the training time in vanilla partition-parallel training (GCN); 2) PipeGCN (with or without smoothing) largely hides the communication overhead across different numbers of partitions and all datasets, e.g., the communication time is hidden completely in 2-partition Reddit and almost completely in 3-partition Yelp, thus the substantial reduction in training time; and 3) the proposed smoothing incurs only minimal overhead (i.e., a minor difference between PipeGCN and PipeGCN-GF). Lastly, we also notice that when the communication ratio is extremely large (85%+), PipeGCN hides communication significantly but not completely (e.g., 10-partition ogbn-products), in which case we can employ compression and quantization techniques (Alistarh et al. (2017); Seide et al. (2014); Wen et al. (2017); Li et al. (2018a); Yu et al. (2018)) from the area of general distributed SGD to further reduce the communication, as compression is orthogonal to the pipeline method. Besides compression, we can also increase the pipeline depth of PipeGCN, e.g., using two iterations of compute to hide one iteration of communication, which is left to our future work.
D MAINTAINING CONVERGENCE SPEED (ADDITIONAL EXPERIMENTS)
We provide the additional convergence curves on Yelp in Fig. 9. We can see that PipeGCN and its variants maintain the convergence speed w.r.t the number of epochs while substantially reducing the end-to-end training time.
E SCALING GCN TRAINING OVER MULTIPLE GPU SERVERS
We also scale up PipeGCN training over multiple GPU servers (each containing AMD Radeon Instinct MI60 GPUs, an AMD EPYC 7642 CPU, and 48-lane PCIe 3.0 links connecting CPU-GPU and GPU-GPU), networked with 10Gbps Ethernet.
The accuracy results of PipeGCN and its variants are summarized in Tab. 7:
Furthermore, we provide PipeGCN’s speedup against vanilla partition-parallel training in Tab. 8:
From the two tables above, we can observe that our PipeGCN family consistently maintains the accuracy of the full-graph training, while improving the throughput by 15%∼66% regardless of the machine settings and number of partitions.
F IMPLEMENTATION DETAILS
We discuss the details of the effective and efficient implementation of PipeGCN in this section.
First, for parallel communication and computation, a second cudaStream is required for communication besides the default cudaStream for computation. To also save memory buffers for communication, we batch all communication (e.g., from different layers) into this second cudaStream. When the popular communication backend, Gloo, is used, we parallelize the CPU-GPU transfer with CPU-CPU transfer.
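As a rough sketch of this scheduling (not the released code; the collective used, the buffer layout, and all variable names are assumptions made for illustration), the boundary-feature exchange can be launched asynchronously on the dedicated stream while the default stream keeps computing with the stale features already on hand:

import torch
import torch.distributed as dist

comm_stream = torch.cuda.Stream()  # second CUDA stream reserved for communication

def exchange_boundary(send_buf, recv_buf):
    # launch the (batched) boundary-feature exchange on the communication stream
    with torch.cuda.stream(comm_stream):
        work = dist.all_to_all_single(recv_buf, send_buf, async_op=True)
    return work

def train_iteration(model, local_feats, stale_boundary, send_buf, recv_buf):
    work = exchange_boundary(send_buf, recv_buf)   # fetch next epoch's boundary features ...
    out = model(local_feats, stale_boundary)       # ... while computing with the stale ones
    work.wait()                                    # new boundary features ready for the next epoch
    torch.cuda.current_stream().wait_stream(comm_stream)
    return out, recv_buf                           # recv_buf becomes next epoch's stale_boundary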
Second, when a Dropout layer is used in the GCN model, it should be applied after communication. The implementation of the dropout layer for PipeGCN should be considered carefully so that the dropout mask remains consistent for the input tensor and the corresponding gradient. If the input feature passes through the dropout layer before being communicated, then during the backward phase the dropout mask has changed and the gradient of masked values is involved in the computation, which introduces noise into the calculation of follow-up gradients. As a result, the dropout layer can only be applied after receiving the boundary features. | 1. What is the focus of the paper regarding GNNs?
2. What are the strengths of the proposed method, particularly its simplicity and stability?
3. What are the weaknesses of the paper, especially regarding its comparison with other works and assumptions?
4. How does the reviewer assess the significance of the method's ability to converge?
5. Do you have any questions about the experimental setup or results? | Summary Of The Paper
Review | Summary Of The Paper
A method of partitioning GNNs, and using stale versions of activations across partitions. This allows activations to be communicated in parallel (rather than sequentially) across GPUs, which improves throughput. Theoretical results show that the stale activations still result in convergence, and empirically the method appears to perform better than the rate given by the theory.
Review
Strengths
Simple method that is relatively easy to implement compared to other parallel GNN systems. Does not require complex memory management (e.g. involving CPU memory). That makes it potentially easy to extend to a distributed network setting (multiple machines).
The staleness is in activations/gradients, not weights. Intuitively, this leads to more stable convergence (vs stale weights) because (1) the stale activations/gradients only occur on the partition boundaries; (2) the activation/gradient for a neuron on the boundary is a combination of stale (inter-partition) and non-stale (intra-partition) activations/gradients
I did not evaluate the theoretical proofs or convergence rate for mathematical correctness, but the fact that the method converges at all is intuitively reasonable, given that stale gradient systems are already known to converge.
Weaknesses
It is rather surprising that vanilla partition parallel, a simple baseline that many other GNN systems are adapted from, beats recently published (2020) methods by a very wide margin, such as ROC and CAGNET. Although the authors did give some analysis as to why this is the case, I am not convinced those could be the major factors. It is more likely that the baselines have been misconfigured. For the avoidance of doubt, the authors should provide numerical evidence that their obtained results are consistent with the results reported in the ROC and CAGNET papers.
The method relies on a somewhat brittle assumption that inter-partition communication time is roughly equal to activation/gradient computation time. If the communication time for a given model and partitioning were significantly smaller or larger than the computation time, the throughput gain from parallelizing computation with communication would be much less impressive. Ultimately, the method does not truly solve the issue of overwhelming communication volume - which sampling-based methods do address. The paper should acknowledge this limitation, rather than claiming superiority in all scenarios to sampling based methods.
Minor issues and suggestions
The authors should not use the word "distributed" unless the implementation and experiments support networked machines. The hardware configuration in the experiments is a single machine with multiple GPUs.
It is interesting that the experiments were performed on commodity hardware with what I understand to be no NVLink, and only PCIe v3 x16 connections. The bandwidth of PCIe v3 connections is roughly comparable to 100-200Gbps network connections. I take that as a promising sign that the method will continue to perform well with distributed network machines. |
ICLR | Title
Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation
Abstract
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
1 INTRODUCTION
Agents with general artificial intelligence are supposed to learn and perform well on multiple tasks. Continual learning refers to the scenarios where a machine learning system can retain previously acquired skills while learning new ones. However, when trained on a sequence of tasks, neural networks usually forget about previous tasks after their weights are adjusted for a new task. This notorious problem known as catastrophic interference (CI) (McCloskey & Cohen, 1989; Ratcliff, 1990; French, 1999; Kumaran et al., 2016) poses a serious challenge towards continual learning.
Many approaches have been proposed to overcome or mitigate the problem of CI in the last three decades (Hinton & Plaut, 1987; French, 1991; Ans & Rousset, 1997; French, 1997; Srivastava et al., 2014). Especially recently, an avalanche of new methods in the deep learning field has brought about dramatic improvements in continual learning in neural networks. Kirkpatrick et al. (2017) introduced a regularization-based method called elastic weight consolidation (EWC), which uses the posterior distribution of parameters for the old tasks as a prior for the new task. They approximated the posterior by a Gaussian distribution with the parameters for old tasks as the mean and the inverse diagonal of the Fisher information matrix as the variance. Lee et al. (2017) introduced two incremental moment matching (IMM) methods called mean-IMM and mode-IMM. Mean-IMM approximates the distribution of parameters for both old and new tasks by a Gaussian distribution, which is estimated by minimizing its KL-divergence from the mixture of two Gaussian posteriors, one for the old task and the other one for the new task. Mode-IMM estimates the mode of this mixture of two Gaussians and uses it as the optimal parameters for both tasks.
In the field of Reservoir Computing (Jaeger, 2001; Maass et al., 2002), an effective solution to CI using conceptors was proposed by Jaeger (2014) to incrementally train a recurrent neural network to generate spatial-temporal signals. Conceptors are a general-purpose neuro-computational mechanism that can be used in a diversity of neural information processing tasks including temporal pattern classification, one-shot learning, human motion pattern generation, de-noising and signal separation (Jaeger, 2017). In this paper, we adopt and extend the method introduced in Jaeger (2014) and propose a conceptor-aided backpropagation (CAB) algorithm to train feed-forward networks. For each layer of a network, CAB computes a conceptor to characterize the linear subspace spanned by the neural activations in that layer that have appeared in already learned tasks. When the network is trained on a new task, CAB uses the conceptor to adjust the gradients given by backpropagation so that the linear transformation restricted to the characterized subspace will be preserved after the
gradient descent procedure. Experimental results on two benchmark tests show the highly competitive performance of CAB.
The rest of this paper is structured as follows. Section 2 introduces conceptors and their application to incremental learning by ridge regression. Section 3 extends the method to stochastic gradient descent and describes the CAB algorithm. Section 4 compares its performance on the permuted and disjoint MNIST tasks to recent methods that address the same problem. Finally we conclude our paper in Section 5.
2 INCREMENTAL RIDGE REGRESSION BY CONCEPTORS
This section reviews the basics of conceptor theory and its application to incrementally training linear readouts of recurrent neural networks as used in reservoir computing. A comprehensive treatment can be found in (Jaeger, 2014).
2.1 CONCEPTORS
In brief, a matrix conceptor C for some vector-valued random variable x ∈ RN is defined as a linear transformation that minimizes the following loss function.
$\mathbb{E}_x[\|x - Cx\|^2] + \alpha^{-2}\|C\|_{fro}^2 \quad (1)$

where $\alpha$ is a control parameter called aperture and $\|\cdot\|_{fro}$ is the Frobenius norm. This optimization problem has a closed-form solution

$C = R(R + \alpha^{-2}I)^{-1} \quad (2)$

where $R = \mathbb{E}_x[xx^{\top}]$ is the $N \times N$ correlation matrix of $x$, and $I$ is the $N \times N$ identity matrix. The result given in (2) can be understood by studying the singular value decomposition (SVD) of $C$. If $R = U\Sigma U^{\top}$ is the SVD of $R$, then the SVD of $C$ is given as $USU^{\top}$, where the singular values $s_i$ of $C$ can be written in terms of the singular values $\sigma_i$ of $R$: $s_i = \sigma_i/(\sigma_i + \alpha^{-2}) \in [0, 1)$. In intuitive terms, $C$ is a soft projection matrix onto the linear subspace where the samples of $x$ lie. For a vector $y$ in this subspace, $C$ acts like the identity: $Cy \approx y$, and when some noise $\epsilon$ orthogonal to the subspace is added to $y$, $C$ de-noises: $C(y + \epsilon) \approx y$. Figure 1 shows the ellipsoids corresponding to three sets of $\mathbb{R}^3$ points. We define the quota $Q(C)$ of a conceptor to be the mean of its singular values: $Q(C) := \frac{1}{N}\sum_{i=1}^{N} s_i$. Intuitively, the quota measures the fraction of the total dimensions of the entire vector space that is claimed by $C$.
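As a minimal illustrative sketch (not code from the paper; names are placeholders), a conceptor and its quota can be computed from a batch of samples with a few lines of NumPy:

import numpy as np

def conceptor(X, aperture):
    # X holds samples column-wise, shape (N, n)
    N, n = X.shape
    R = X @ X.T / n                                             # correlation matrix E[x x^T]
    return R @ np.linalg.inv(R + aperture ** (-2) * np.eye(N))  # C = R (R + alpha^-2 I)^-1

def quota(C):
    # mean singular value: fraction of the space claimed by C
    return np.linalg.svd(C, compute_uv=False).mean()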
Moreover, logic operations that satisfy most laws of Boolean logic can be defined on matrix conceptors as the following:
$\neg C := I - C \quad (3)$

$C_i \vee C_j := (R_i + R_j)(R_i + R_j + \alpha^{-2}I)^{-1} \quad (4)$

$C_i \wedge C_j := \neg(\neg C_i \vee \neg C_j) \quad (5)$
where ¬C softly projects onto a linear subspace that can be roughly understood as the orthogonal complement of the subspace characterized by C. Ci∨Cj is the conceptor computed from the union of the two sets of sample points from which Ci and Cj are computed. It describes a space that is approximately the sum of linear subspaces characterized by Ci and Cj , respectively. The definition of Ci ∧ Cj reflects de Morgan’s law. Figure 2 illustrates the geometry of these operations.
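These operations can be sketched in the same NumPy style (again an illustrative sketch, not the paper's code; it recovers $R = \alpha^{-2}(I - C)^{-1}C$, which follows from Equation 2, and uses a pseudo-inverse for numerical robustness when a conceptor is rank-deficient):

import numpy as np

def NOT(C):
    return np.eye(C.shape[0]) - C

def OR(Ci, Cj, aperture):
    I = np.eye(Ci.shape[0])
    Ri = aperture ** (-2) * np.linalg.pinv(I - Ci) @ Ci   # recover correlation matrices from (2)
    Rj = aperture ** (-2) * np.linalg.pinv(I - Cj) @ Cj
    return (Ri + Rj) @ np.linalg.inv(Ri + Rj + aperture ** (-2) * I)

def AND(Ci, Cj, aperture):
    return NOT(OR(NOT(Ci), NOT(Cj), aperture))            # de Morgan's law, Equation 5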
2.2 INCREMENTAL RIDGE REGRESSION
This subsection explains how conceptors can be applied to master continual learning in a simple linear model trained on a supervised task by ridge regression. The training is done sequentially on multiple input-to-output mapping tasks. This simplified scenario illustrates the working principle of continual learning with conceptors and will later be used repeatedly as a sub-procedure in the CAB algorithm for training multilayer feed-forward networks.
Consider a sequence of $m$ incoming tasks indexed by $j$. We denote the training dataset for the $j$-th task by $\{(x_1^j, y_1^j), \cdots, (x_n^j, y_n^j)\}$, where $x_i^j \in \mathbb{R}^N$ are input vectors and $y_i^j \in \mathbb{R}^M$ their corresponding target outputs. Whenever the training dataset for a new task is available, the incremental learning method will compute a matrix conceptor $C^j$ for the input variable of the new task using Equation 2 and update the linear model, resulting in a sequence of linear models $W^1, \ldots, W^m$ such that $W^j$ solves not only the $j$-th task but also all previous tasks: for $k \le j$, $y^k \approx W^j x^k$. The conceptor $C^j$ is a soft projection matrix onto the linear subspace spanned by input patterns from the $j$-th task. Then, $A^{j-1} = C^1 \vee \cdots \vee C^{j-1}$ characterizes the memory space already claimed by the tasks $1, \ldots, j-1$, and $F^j = \neg A^{j-1}$, the orthogonal complement of $A^{j-1}$, represents the memory space still free for the $j$-th task. Here "memory space" refers to the linear space of input vectors. In detail, this method proceeds in the following way:
• Initialization (no task trained yet): $W^0 = 0_{M \times N}$, $A^0 = 0_{N \times N}$.

• Incremental task learning: For tasks $j = 1, \ldots, m$ do:

1. Store the input vectors from the $j$-th training dataset of size $n$ into an $N \times n$ sized input collection matrix $X^j$, and store the output vectors into an $M \times n$ sized output collection matrix $Y^j$.

2. Compute the conceptor for this task by $C^j = R^j(R^j + \alpha^{-2}I)^{-1}$, where $R^j = \frac{1}{n} X^j X^{j\top}$.

3. Train an increment matrix $W_{inc}^j$ (to be added to $W^{j-1}$, yielding $W^j$), with the crucial aid of a helper conceptor $F^j$:

(a) $F^j := \neg A^{j-1}$ (comment: this conceptor characterizes the "still disposable" memory space for the $j$-th task),

(b) $T := Y^j - W^{j-1}X^j$ (comment: this matrix consists of target values for a linear regression to compute $W_{inc}^j$),

(c) $S := F^j X^j$ (comment: this matrix consists of input arguments for the linear regression),

(d) $W_{inc}^j = ((SS^{\top}/n + \lambda^{-2}I)^{-1} S T^{\top}/n)^{\top}$ (comment: carry out the regression, regularized by $\lambda^{-2}$),

4. Update $W^j$: $W^j = W^{j-1} + W_{inc}^j$.

5. Update $A$: $A^j = A^{j-1} \vee C^j$ (comment: this is possible due to the associativity of the $\vee$ operation on conceptors).
The weight increment $W_{inc}^j$ does not interfere much with the previously learned weights $W^{j-1}$ because the regularization in step 3(d) constrains the row space of $W_{inc}^j$ to be only the linear subspace spanned by the input arguments defined in 3(c), which are inside the kernel of $W^{j-1}$ due to the projection by $F^j$. Intuitively speaking, when learning a new task, this algorithm exploits only the components of input vectors in the still unused space (the kernel of $W^{j-1}$, characterized by $F^j$) to compensate errors for the new task and leaves the directions in the already used memory space (the row space of $W^{j-1}$, characterized by $A^{j-1}$) intact.
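A compact NumPy rendition of this loop is sketched below (an illustration rather than the reference implementation; it reuses the conceptor, NOT, and OR helpers sketched in Section 2.1):

import numpy as np

def train_incrementally(tasks, N, M, aperture, lam):
    # tasks: list of (X, Y) pairs with X of shape (N, n) and Y of shape (M, n)
    W = np.zeros((M, N))
    A = np.zeros((N, N))
    for X, Y in tasks:
        n = X.shape[1]
        C = conceptor(X, aperture)
        F = NOT(A)                          # still disposable input space
        T = Y - W @ X                       # residual targets for the new task
        S = F @ X                           # inputs restricted to the free space
        W_inc = np.linalg.solve(S @ S.T / n + lam ** (-2) * np.eye(N), S @ T.T / n).T
        W = W + W_inc                       # step 4
        A = OR(A, C, aperture)              # step 5: claim the space used by this task
    return W, A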
3 CONCEPTOR-AIDED SGD AND BACK-PROP
In this section, we first derive a stochastic gradient descent version of the algorithm described in the previous section, then present the procedure of CAB.
3.1 SGD
In the algorithm introduced in the previous section, $W_{inc}^j$ is computed by ridge regression, which offers a closed-form solution to minimize the following cost function

$\mathcal{J}(W_{inc}^j) := \mathbb{E}[|W_{inc}^j s - t|^2] + \lambda^{-2}|W_{inc}^j|_{fro}^2 \quad (6)$

where $t = y^j - W^{j-1}x^j$ and $s = F^j x^j$. One can also minimize this cost function by stochastic gradient descent (SGD), which starts from an initial guess of $W_{inc}^j$ and repeatedly performs the following update

$W_{inc}^j \leftarrow W_{inc}^j - \eta \nabla_{W_{inc}^j} \mathcal{J}(W_{inc}^j) \quad (7)$

where $\eta$ is the learning rate and the gradient is given by:

$\nabla_{W_{inc}^j} \mathcal{J}(W_{inc}^j) = 2\mathbb{E}[(W_{inc}^j s - t)s^{\top}] + 2\lambda^{-2} W_{inc}^j \quad (8)$

Substituting $t$ by $y^j - W^{j-1}x^j$ and $s$ by $F^j x^j = (I - A^{j-1})x^j$ in (8), we get

$\nabla_{W_{inc}^j} \mathcal{J}(W_{inc}^j) = 2\mathbb{E}[(W_{inc}^j(I - A^{j-1})x^j - y^j + W^{j-1}x^j)s^{\top}] + 2\lambda^{-2}W_{inc}^j \quad (9)$

$= 2\mathbb{E}[(-W_{inc}^j A^{j-1}x^j + (W^{j-1} + W_{inc}^j)x^j - y^j)s^{\top}] + 2\lambda^{-2}W_{inc}^j \quad (10)$
Due to the regularization term in the cost function, as the optimization goes on, $W_{inc}$ will eventually null the input components that are not inside the linear subspace characterized by $F^j$; hence $W_{inc}^j A^{j-1}x^j$ will converge to $0$ as the algorithm proceeds. In addition, since $W^j = W^{j-1} + W_{inc}^j$, (10) can be simplified to

$\nabla_{W_{inc}^j} \mathcal{J}(W_{inc}^j) = 2\mathbb{E}[(W^j x^j - y^j)s^{\top}] + 2\lambda^{-2}W_{inc}^j \quad (11)$
Adding $W^{j-1}$ to both sides of (7), we obtain the update rule for $W^j$:

$W^j \leftarrow W^j - 2\eta\mathbb{E}[e s^{\top}] - 2\eta\lambda^{-2}W_{inc}^j \quad (12)$
where $e := W^j x^j - y^j$. In practice, at every iteration, the expected value can be approximated by a mini-batch of size $n_B$, indexed by $i_B$:

$\hat{\mathbb{E}}[e s^{\top}] = \frac{1}{n_B} \sum_{i_B=1}^{n_B} (W^j x_{i_B}^j - y_{i_B}^j)(F^j x_{i_B}^j)^{\top} = \frac{1}{n_B} \sum_{i_B=1}^{n_B} (W^j x_{i_B}^j - y_{i_B}^j) x_{i_B}^{j\top} F^j \quad (13)$

where the transpose for $F^j$ can be dropped since it is symmetric.
If we only train the $j$-th task without considering the previous tasks, the update rule given by normal SGD is

$W^j \leftarrow W^j - 2\eta\mathbb{E}[e x^{j\top}] - 2\eta\lambda^{-2}W^j \quad (14)$
Comparing this to the update rule in (12), we notice two modifications when a conceptor is adopted to avoid CI: first, the gradient of the weights is calculated using the conceptor-projected input vector $s = F^j x^j$ instead of the original input vector $x^j$; second, regularization is done on the weight increment $W_{inc}^j$ rather than on the final weight $W^j$. These two modifications lead to our design of the conceptor-aided algorithm for training multilayer feed-forward networks.
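For concreteness, one mini-batch update following rule (12) can be sketched as follows (illustrative NumPy; all names are placeholders):

def cab_sgd_step(W, W_prev, F, X_batch, Y_batch, eta, lam):
    # W:      current weights for task j (initialized to W_prev)
    # W_prev: weights frozen after tasks 1..j-1
    # F:      conceptor of the still free input space for task j
    # X_batch, Y_batch: mini-batch with samples as columns
    nB = X_batch.shape[1]
    E = W @ X_batch - Y_batch                  # errors e = W x - y
    grad = E @ X_batch.T @ F / nB              # E_hat[e s^T] with s = F x, Equation 13
    W_inc = W - W_prev                         # only the increment is regularized
    return W - 2 * eta * grad - 2 * eta * lam ** (-2) * W_inc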
3.2 BACKPROP
The basic idea of CAB is to guide the gradients of the loss function on every linear component of the network by a matrix conceptor computed from previous tasks during error back-propagation (Rumelhart et al., 1986), repeatedly applying the conceptor-aided SGD technique introduced in the previous section in every layer.
Consider a feed-forward network with $L + 1$ layers, indexed by $l = 0, \ldots, L$, such that the 0-th and the $L$-th layers are the input and output layers, respectively. $W^{(l)}$ represents the linear connections between the $(l-1)$-th and the $l$-th layer, where we refer to the former as the pre-synaptic layer with respect to $W^{(l)}$, and to the latter as the post-synaptic layer. We denote by $N^{(l)}$ the size of the $l$-th layer (excluding the bias unit) and by $A^{(l)j}$ a conceptor characterizing the memory space in the $l$-th layer used up by the first $j$ tasks. Let $\sigma(\cdot)$ be the activation function of the nonlinear neurons and $\theta$ all the parameters of the network to be trained. Then the incremental training method with CAB proceeds as follows:

• Initialization (no task trained yet): $\forall l = 0, \ldots, L-1$, $A^{(l)0} := 0_{(N^{(l)}+1) \times (N^{(l)}+1)}$, and randomly initialize $W^{(l+1)0}$ to be a matrix of size $N^{(l+1)} \times (N^{(l)} + 1)$.

• Incremental task learning: For $j = 1, \ldots, m$ do:

1. $\forall l = 0, \ldots, L-1$, $F^{(l)j} = \neg A^{(l)(j-1)}$. (This conceptor characterizes the still disposable vector space in layer $l$ for learning task $j$.)

2. Update the network parameters $\theta^{(j-1)}$, obtained after training the first $j-1$ tasks, to $\theta^j$ by stochastic gradient descent, where the gradients are computed by CAB instead of the classical backprop. Algorithms 1 and 2 detail the forward and backward pass of CAB, respectively. Different from classical backprop, the gradients are guided by a matrix conceptor $F^{(l)j}$, such that in each layer only the activity in the still disposable memory space will contribute to the gradient. Note that the conceptors remain the same until convergence of the network for task $j$.

3. After training on the $j$-th task, run the forward procedure again on a batch of $n_B$ input vectors, indexed by $i_B$, taken from the $j$-th training dataset, to collect the activations $h_{i_B}^{(l)j}$ of each layer into an $N^{(l)} \times n_B$ sized matrix $H^{(l)j}$, and set the correlation matrix $R^{(l)j} = \frac{1}{n_B} H^{(l)j}(H^{(l)j})^{\top}$.

4. Compute a conceptor on the $l$-th layer for the $j$-th pattern by $C^{(l)j} = R^{(l)j}(R^{(l)j} + \alpha^{-2}I_{N^{(l)} \times N^{(l)}})^{-1}$, $\forall l = 0, \ldots, L-1$. Finding an optimal aperture can be done by a cross-validation search1.

5. Update the conceptor for the already used space in every layer: $A^{(l)j} = A^{(l)(j-1)} \vee C^{(l)j}$, $\forall l = 0, \ldots, L-1$.
Algorithm 1 The forward procedure of conceptor-aided backprop, adapted from the traditional backprop. Input vectors are passed through a feed-forward network to compute the cost function. $\mathcal{L}(\hat{y}^j, y^j)$ denotes the loss for the $j$-th task, to which a regularizer $\Omega(\theta_{inc}^j) = \Omega(\theta^j - \theta^{j-1}) = \|\theta^j - \theta^{j-1}\|_{fro}^2$ is added to obtain the total cost $J$, where $\theta$ contains all the weights (biases are considered as weights connected to the bias units). The increments of the parameters rather than the parameters themselves are regularized, similar to the conceptor-aided SGD.

Require: Network depth $L$
Require: $W^{(l)j}$, $l \in \{1, \ldots, L\}$, the weight matrices of the network
Require: $x^j$, one input vector of the $j$-th task
Require: $y^j$, the target output for $x^j$

1: $h^{(0)} = x^j$
2: for $l = 1, \ldots, L$ do
3:   $b^{(l)} = [h^{(l-1)\top}, 1]^{\top}$, include the bias unit
4:   $a^{(l)} = W^{(l)j} b^{(l)}$
5:   $h^{(l)} = \sigma(a^{(l)})$
6: end for
7: $\hat{y}^j = h^{(L)}$
8: $J = \mathcal{L}(\hat{y}^j, y^j) + \lambda\Omega(\theta_{inc}^j)$
Algorithm 2 The backward procedure of conceptor-aided backprop for the $j$-th task, adapted from the traditional backprop. The gradient $g$ of the loss function $\mathcal{L}$ on the activations $a^{(l)}$ represents the error for the linear transformation $W^{(l)j}$ between the $(l-1)$-th and the $l$-th layers. In the standard backprop algorithm, the gradient of $\mathcal{L}$ on $W^{(l)j}$ is computed as an outer product of the post-synaptic errors $g$ and the pre-synaptic activities $h^{(l-1)}$. This resembles the computation of the gradient in the linear SGD algorithm, which motivates us to apply conceptors in a similar fashion as in the conceptor-aided SGD. Specifically, we project the gradient $\nabla_{W^{(l)j}}\mathcal{L}$ by the matrix conceptor $F^{(l-1)j}$ that indicates the free memory space on the pre-synaptic layer.

1: $g \leftarrow \nabla_{\hat{y}} J = \nabla_{\hat{y}} \mathcal{L}(\hat{y}, y)$
2: for $l = L, L-1, \ldots, 1$ do
3:   Convert the gradient on the layer's output into a gradient on the pre-nonlinearity activation ($\odot$ denotes element-wise multiplication): $g \leftarrow \nabla_{a^{(l)}} J = g \odot \sigma'(a^{(l)})$
4:   Compute the gradient of the weights, project it by $F^{(l-1)j}$, and add the regularization term on the increment: $\nabla_{W^{(l)j}} J = g(F^{(l-1)j}b^{(l-1)})^{\top} + \lambda\nabla_{W^{(l)j}}\Omega(\theta_{inc}^j) = g\, b^{(l-1)\top} F^{(l-1)j} + 2\lambda W_{inc}^{(l)j} = g\, b^{(l-1)\top} F^{(l-1)j} + 2\lambda(W^{(l)j} - W^{(l)j-1})$
5:   Propagate the gradients w.r.t. the next lower-level hidden layer's activations: $g \leftarrow \nabla_{h^{(l-1)}} J = W^{(l)j\top} g$
6: end for
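To tie Algorithms 1 and 2 together, the following NumPy sketch performs one CAB update on a single example (an illustration with a logistic sigmoid and squared-error loss; all names are placeholders rather than the reference implementation):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cab_update(Ws, Ws_prev, Fs, x, y, eta, lam):
    # Ws:      current weight matrices; Ws[l] maps layer l (plus bias) to layer l+1
    # Ws_prev: weights frozen after the previous tasks (same shapes)
    # Fs:      free-space conceptors on each pre-synaptic layer, shape (N_l + 1, N_l + 1)
    bs, hs, h = [], [], x
    for W in Ws:                                   # forward pass (Algorithm 1)
        b = np.append(h, 1.0)                      # append the bias unit
        h = sigmoid(W @ b)
        bs.append(b)
        hs.append(h)
    g = hs[-1] - y                                 # derivative of the (halved) squared error
    for l in reversed(range(len(Ws))):             # backward pass (Algorithm 2)
        g = g * hs[l] * (1.0 - hs[l])              # multiply by sigma'(a)
        grad_W = np.outer(g, Fs[l] @ bs[l]) + 2 * lam * (Ws[l] - Ws_prev[l])
        g = (Ws[l].T @ g)[:-1]                     # propagate and drop the bias component
        Ws[l] = Ws[l] - eta * grad_W
    return Ws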
4 EXPERIMENTS
4.1 PERMUTED MNIST EXPERIMENT
To test the performance of CAB, we evaluated it on the permuted MNIST experiment (Srivastava et al., 2013; Goodfellow et al., 2014; Kirkpatrick et al., 2017; Lee et al., 2017), where a sequence of pattern recognition tasks is created from the MNIST dataset (LeCun et al., 1998). For each task, a random permutation of the input image pixels is generated and applied to all images in MNIST to obtain a new shuffled dataset, equally difficult to recognize as the original one. The objective of each task is to recognize these images with shuffled pixels.
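Generating such a task family is straightforward; a minimal sketch (illustrative, not the code used for the experiments):

import numpy as np

def make_permuted_tasks(images, labels, num_tasks, seed=0):
    # images: (num_samples, 784) flattened MNIST images; one fixed pixel permutation per task
    rng = np.random.RandomState(seed)
    tasks = []
    for _ in range(num_tasks):
        perm = rng.permutation(images.shape[1])
        tasks.append((images[:, perm], labels))
    return tasks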
For a proof-of-concept demonstration, we trained a simple but sufficient [784-100-10] feed-forward network to classify 10 permuted MNIST datasets. The network has logistic sigmoid neurons in both hidden and output layers, and is trained with mean squared error as the cost function. Vanilla SGD was used in all experiments to optimize the cost function. Learning rate and aperture were set to 0.1 and 4, respectively. For comparison, we also tested EWC on the same task with the same network architecture, based on the implementation by Seff (2017). The parameters chosen for the EWC algorithm were 0.01 for the learning rate and 15 for the weight of the Fisher penalty term. Figure 3 shows the performance of CAB on this task; the average testing accuracy is 95.2% after learning all 10 tasks sequentially. Although a fair amount of effort was spent on searching for optimal parameters for EWC, the accuracies shown here might still not reflect its best performance. However, the same experiment with EWC was also conducted in Kemker et al. (2017), where the authors reimplemented EWC on a network with higher capacity (2 hidden layers and 400 ReLU neurons per layer) and the resulting average accuracy after learning 10 tasks sequentially was shown to be around 93%.
Since all tasks are generated by permuting the same dataset, the portion of the input space occupied by each of them should have the same size. However, as more tasks are learned, the chance that the space of a new task will overlap with the already used input space increases. Figure 4 shows the singular value spectra and quota of the input and hidden layer conceptors every time after a new task is learned. As the incremental learning proceeds, it becomes less likely for a new task to be in the free space. For example, the second task increases the quota of the input layer memory space by 0.1, whereas the 10th task increases it by only 0.03. However, CAB still manages to make the network learn new tasks based on their input components in the non-overlapping space.
1 Jaeger (2014) proposes a number of methods for analytical aperture optimization. It remains for future work to determine how these methods transfer to our situation.
4.2 DISJOINT MNIST EXPERIMENT
We then applied CAB to categorize the disjoint MNIST datasets into 10 classes (Srivastava et al., 2013; Lee et al., 2017). In this experiment, the original MNIST dataset is divided into two disjoint datasets, with the first one consisting of data for the first five digits (0 to 4) and the second one of the remaining five digits (5 to 9). This task requires a network to learn these two datasets one after the other, then examines its performance of classifying the entire MNIST testing set into 10 classes. The current state-of-the-art accuracy on this task, averaged over 10 learning trials, is 94.12(±0.27)%, achieved by Lee et al. (2017) using IMM. They also tested EWC on the same task, and the average accuracy was 52.72(±1.36)%. To test our method, we trained a feed-forward network with [784-800-10] neurons. Logistic sigmoid nonlinearities were used in both hidden and output layers, and the network was trained with vanilla SGD to minimize mean squared errors. The aperture α = 9 was used for all conceptors on all layers; the learning rate η and regularization coefficient λ were chosen to be 0.1 and 0.005, respectively. The accuracy of CAB on this task, measured by repeating the experiment 10 times, is 94.91(±0.30)%. It is worth mentioning that the network used by Lee et al. (2017) for testing IMM and EWC had [784-800-800-10] rectified linear units (ReLU), so CAB achieved better performance with fewer layers and neurons.
4.3 COMPUTATIONAL COST
If a conceptor is computed by ridge regression, the time complexity is $O(nN^2 + N^3)$ when the design matrix is dense, where $n$ is the number of samples and $N$ the number of features. In terms of wall time measures, the time taken to compute a conceptor from the entire MNIST training set (in this case, n = 55000 images and N = 784 pixels, corresponding to the input layer in our networks) is 0.42 seconds of standard notebook CPU time on average. Although we did not implement it in these experiments, incremental online adaptation of conceptors by gradient descent is also possible in principle and would come at a cost of $O(N^2)$ per update.
5 CONCLUSION
In this work, we first reviewed the conceptor-based incremental ridge regression algorithm, introduced in section 3.11 of Jaeger (2014) for memory management in recurrent neural networks. Then we derived its stochastic gradient descent version for optimizing the same objective. Finally we designed a conceptor-aided backprop algorithm by applying a conceptor to every linear layer of a feed-forward network. This method uses conceptors to guide gradients of parameters during the backpropagation procedure. As a result, learning a new task interferes only minimally with previously learned tasks, and the amount of already used network capacity can be monitored via the singular value spectra and quota of conceptors.
In Jaeger (2014), different scenarios for continual learning are investigated in a reservoir computing setting. Two extreme cases are obtained when (i) the involved learning tasks are entirely unrelated to each other, versus (ii) all tasks come from the same parametric family of learning tasks. The two cases differ conspicuously with regards to the geometry of involved conceptors, and with regards to opportunities to re-use previously acquired functionality in subsequent learning episodes. The permuted MNIST task is an example of (i) while the disjoint MNIST task rather is of type (ii). Conceptors provide an analytical tool to discuss the “family relatedness” and enabling/disabling conditions for continual learning in geometrical terms. Ongoing and future research is devoted to a comprehensive mathematical analysis of these phenomena which in our view lie at the heart of understanding continual learning.
ACKNOWLEDGMENTS
The work reported in this article was partly funded through the European H2020 collaborative project NeuRAM3 (grant Nr 687299). | 1. What are the similarities and differences between the proposed approach and previous works, particularly the Jaeger 2014 report?
2. What are the original contributions of the paper, and how do they improve upon existing methods?
3. How does the proposed approach compare to other methods in terms of performance and efficiency?
4. What is the significance of the results presented in Figures 1 and 2, and how do they relate to the main contribution of the paper?
5. How does the paper address the issue of catastrophic forgetting, and what are the implications of the proposed approach for this problem? | Review | Review
The paper leaves me guessing which part is a new contribution, and which one is already possible with conceptors as described in the Jaeger 2014 report. Figure (1) in the paper is identical to the one in the (short version of) the Jaeger report but is missing an explicit reference. Figure 2 is almost identical, again a reference to the original would be better.
Conceptors can be trained with a number of approaches (as described both in the 2014 Jaeger tech report and in the JMLR paper), including ridge regression. What I am missing here is a clear indication what is an original contribution of the paper, and what is already possible using the original approach. The fact that additional conceptors can be trained does not appear new for the approach described here. If the presented approach was an improvement over the original conceptors, the evaluation should compare the new and the original version.
The evaluation also leaves me a little confused in an additional dimension: the paper title and abstract suggested that the contribution is about overcoming catastrophic forgetting. The evaluation shows that the approach performs better classifying MNIST digits than another approach. This is nice but doesn't really tell me much about overcoming catastrophic forgetting. |
ICLR | Title
Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation
Abstract
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
N/A
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
1 INTRODUCTION
Agents with general artificial intelligence are supposed to learn and perform well on multiple tasks. Continual learning refers to the scenarios where a machine learning system can retain previously acquired skills while learning new ones. However, when trained on a sequence of tasks, neural networks usually forget about previous tasks after their weights are adjusted for a new task. This notorious problem known as catastrophic interference (CI) (McCloskey & Cohen, 1989; Ratcliff, 1990; French, 1999; Kumaran et al., 2016) poses a serious challenge towards continual learning.
Many approaches have been proposed to overcome or mitigate the problem of CI in the last three decades (Hinton & Plaut, 1987; French, 1991; Ans & Rousset, 1997; French, 1997; Srivastava et al., 2014). Especially recently, an avalanche of new methods in the deep learning field has brought about dramatic improvements in continual learning in neural networks. Kirkpatrick et al. (2017) introduced a regularization-based method called elastic weight consolidation (EWC), which uses the posterior distribution of parameters for the old tasks as a prior for the new task. They approximated the posterior by a Gaussian distribution with the parameters for old tasks as the mean and the inverse diagonal of the Fisher information matrix as the variance. Lee et al. (2017) introduced two incremental moment matching (IMM) methods called mean-IMM and mode-IMM. Mean-IMM approximates the distribution of parameters for both old and new tasks by a Gaussian distribution, which is estimated by minimizing its KL-divergence from the mixture of two Gaussian posteriors, one for the old task and the other one for the new task. Mode-IMM estimates the mode of this mixture of two Gaussians and uses it as the optimal parameters for both tasks.
In the field of Reservoir Computing (Jaeger, 2001; Maass et al., 2002), an effective solution to CI using conceptors was proposed by Jaeger (2014) to incrementally train a recurrent neural network to generate spatial-temporal signals. Conceptors are a general-purpose neuro-computational mechanism that can be used in a diversity of neural information processing tasks including temporal pattern classification, one-shot learning, human motion pattern generation, de-noising and signal separation (Jaeger, 2017). In this paper, we adopt and extend the method introduced in Jaeger (2014) and propose a conceptor-aided backpropagation (CAB) algorithm to train feed-forward networks. For each layer of a network, CAB computes a conceptor to characterize the linear subspace spanned by the neural activations in that layer that have appeared in already learned tasks. When the network is trained on a new task, CAB uses the conceptor to adjust the gradients given by backpropagation so that the linear transformation restricted to the characterized subspace will be preserved after the
gradient descent procedure. Experiment results of two benchmark tests showed highly competitive performance of CAB.
The rest of this paper is structured as follows. Section 2 introduces conceptors and their application to incremental learning by ridge regression. Section 3 extends the method to stochastic gradient descent and describes the CAB algorithm. Section 4 compares its performance on the permuted and disjoint MNIST tasks to recent methods that address the same problem. Finally we conclude our paper in Section 5.
2 INCREMENTAL RIDGE REGRESSION BY CONCEPTORS
This section reviews the basics of conceptor theory and its application to incrementally training linear readouts of recurrent neural networks as used in reservoir computing. A comprehensive treatment can be found in (Jaeger, 2014).
2.1 CONCEPTORS
In brief, a matrix conceptor C for some vector-valued random variable x ∈ RN is defined as a linear transformation that minimizes the following loss function.
Ex[||x− Cx||2] + α−2||C||2fro (1) where α is a control parameter called aperture and || · ||fro is the Frobenius norm. This optimization problem has a closed-form solution
C = R(R+ α−2I)−1 (2)
where R = Ex[xx>] is the N ×N correlation matrix of x, and I is the N ×N identity matrix. This result given in (2) can be understood by studying the singular value decomposition (SVD) of C. If R = UΣU> is the SVD of R, then the SVD of C is given as USU>, where the singular values si of C can be written in terms of the singular values σi of R: si = σi/(σi + α−2) ∈ [0, 1). In intuitive terms, C is a soft projection matrix on the linear subspace where the samples of x lie. For a vector y in this subspace, C acts like the identity: Cy ≈ y, and when some noise orthogonal to the subspace is added to y, C de-noises: C(y+ ) ≈ y. Figure 1 shows the ellipsoids corresponding to three sets of R3 points. We define the quota Q(C) of a conceptor to be the mean singular values: Q(C) := 1N ∑N i=1 si. Intuitively, the quota measures the fraction of the total dimensions of the entire vector space that is claimed by C.
Moreover, logic operations that satisfy most laws of Boolean logic can be defined on matrix conceptors as the following:
¬C :=I − C, (3) Ci ∨ Cj :=(Ri +Rj)(Ri +Rj + α−2I)−1 (4) Ci ∧ Cj :=¬(¬Ci ∨ ¬Cj) (5)
where ¬C softly projects onto a linear subspace that can be roughly understood as the orthogonal complement of the subspace characterized by C. Ci∨Cj is the conceptor computed from the union of the two sets of sample points from which Ci and Cj are computed. It describes a space that is approximately the sum of linear subspaces characterized by Ci and Cj , respectively. The definition of Ci ∧ Cj reflects de Morgan’s law. Figure 2 illustrates the geometry of these operations.
2.2 INCREMENTAL RIDGE REGRESSION
This subsection explains how conceptors can be applied to master continual learning in a simple linear model trained on a supervised task by ridge regression. The training is done sequentially on multiple input-to-output mapping tasks. This simplified scenario illustrates the working principle of continual learning with conceptors and will later be used repeatedly as a sub-procedure in the CAB algorithm for training multilayer feed-forward networks.
Consider a sequence of m incoming tasks indexed by j. We denote the training dataset for the j-th task by {(xj1, y j 1), · · · , (xjn, yjn)}, where x j i ∈ RN are input vectors and y j i ∈ RM their corresponding target outputs. Whenever the training dataset for a new task is available, the incremental learning method will compute a matrix conceptor Cj for the input variable of the new task using Equation 2 and update the linear model, resulting in a sequence of linear models W 1, . . .Wm such that W j solves not only the j-th task but also all previous tasks: for k ≤ j, yk ≈W jxk. The conceptor Cj is a soft projection matrix onto the linear subspace spanned by input patterns from the j-th task. Then, Aj−1 = C1 ∨ · · · ∨Cj−1 characterizes the memory space already claimed by the tasks 1, . . . , j − 1 and F j = ¬Aj−1, the orthogonal complement of Aj − 1, represents the memory space still free for the j-th task. Here “memory space” refers to the linear space of input vectors. In detail, this method proceeds in the following way:
• Initialization (no task trained yet): W 0 = 0M×N , A0 = 0N×N . • Incremental task learning: For tasks j = 1, . . . ,m do:
1. Store the input vectors from the j-th training dataset of size n into aN ×n sized input collection matrixXj , and store the output vectors into aM×n sized output collection matrix Y j . 2. Compute the conceptor for this task by Cj = Rj(Rj + α−2I)−1, where Rj = 1 nX jXj> 3. Train an increment matrix W jinc (to be added to W j−1, yielding W j), with the crucial
aid of a helper conceptor F j : (a) F j := ¬Aj−1 (comment: this conceptor characterizes the “still disposable”
memory space for the j-th task), (b) T := Y j−(W j−1Xj) (comment: this matrix consists of target values for a linear
regression to compute W jinc), (c) S := F jXj (comment: this matrix consists of input arguments for the linear
regression),
(d) W jinc = ((SS >/n + λ−2I)−1ST>/n)> (comment: carry out the regression,
regularized by λ−2), 4. Update W j : W j = W j−1 +W jinc. 5. Update A : Aj = Aj−1 ∨Cj (comment: this is possible due to the associativity of the ∨ operation on conceptors)
The weight increment W jinc does not interfere much with the previously learned weights W j−1 because the regularization in step 3(d) constrains the row space of W jinc to be only the linear subspace spanned by input arguments defined in 3(c), which are inside the kernel of W j−1 due to the projection by F j . Intuitively speaking, when learning a new task, this algorithm exploits only the components of input vectors in the still unused space (kernel of W j−1, characterized by F j) to compensate errors for the new task and leaves the directions in the already used memory space (row space of W j−1, characterized by Aj−1) intact.
3 CONCEPTOR-AIDED SGD AND BACK-PROP
In this section, we first derive a stochastic gradient descent version of the algorithm described in the previous section, then present the procedure of CAB.
3.1 SGD
In the algorithm introduced in the previous section, W jinc is computed by ridge regression, which offers a closed-form solution to minimize the following cost function
J (W jinc) := E[|W j incs− t| 2] + λ−2|W jinc| 2 fro (6)
where t = yj − W j−1xj , s = F jxj . One can also minimize this cost function by stochastic gradient descent (SGD), which starts from an initial guess of W jinc and repeatedly performs the following update
W jinc ←W j inc − η∇W jincJ (W j inc) (7)
where η is the learning rate and the gradient is given by:
∇W jincJ (W j inc) = 2E[(W j incs− t)s >] + 2λ−2W jinc (8)
Substituting t by yj −W j−1xj and s by F jxj = (I −Aj−1)xj in (8), we get
∇W jincJ (W j inc) = 2E[(W j inc(I −A j−1)xj − yj +W j−1xj)s>] + 2λ−2W jinc (9)
= 2E[(−W jincA j−1xj + (W j−1 +W jinc)x j − yj)s>] + 2λ−2W jinc (10)
Due to the regularization term in the cost function, as the optimization goes on, eventually Winc will null the input components that are not inside the linear subspace characterized by F j , hence W jincA
j−1xj will converge to 0 as the algorithm proceeds. In addition, since W j = W j−1 +W jinc, (10) can be simplified to
∇W jincJ (W j inc) = 2E[(W jxj − yj)s>] + 2λ−2W jinc (11)
Adding W j−1 to both sides of (7), we obtain the update rule for W j :
W j ←W j − 2ηE[es>] + 2ηλ−2W jinc (12)
where e := W jxj − yj . In practice, at every iteration, the expected value can be approximated by a mini-batch of size nB , indexed by iB :
Ê[es>] = 1
nB L∑ iB=0 (W jxjiB − y j iB )(F jxjiB ) > = 1 nB L∑ iB=0 (W jxjiB − y j iB )xj>iB F j (13)
where the transpose for F j can be dropped since it is symmetric.
If we only train the j-th task without considering the previous tasks, the update rule given by normal SGD is

W^j ← W^j − 2η E[e x^{j⊤}] − 2η λ^{-2} W^j   (14)

Comparing this to the update rule in (12), we notice two modifications when a conceptor is adopted to avoid CI: first, the gradients of the weights are calculated using the conceptor-projected input vector s = F^j x^j instead of the original input vector x^j; second, regularization is done on the weight increment W^j_inc rather than the final weight W^j. These two modifications lead to our design of the conceptor-aided algorithm for training multilayer feed-forward networks.
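As a concrete illustration of the update rule (12) with the minibatch estimate (13), a possible numpy sketch of one conceptor-aided SGD step is given below; the function and argument names are ours, and the learning rate and regularization coefficient are placeholder values.

```python
import numpy as np

def cab_sgd_step(W, W_prev, F, X_batch, Y_batch, eta=0.1, lam=0.005):
    """One minibatch update of Eq. (12).
    W       : current weights W^j (M x N)
    W_prev  : frozen weights W^{j-1} from previous tasks (M x N)
    F       : conceptor of the still-free input space (N x N)
    X_batch : N x n_B inputs, Y_batch : M x n_B targets
    """
    n_B = X_batch.shape[1]
    E = W @ X_batch - Y_batch            # errors e = W^j x - y, shape M x n_B
    grad = E @ X_batch.T @ F / n_B       # estimate of E[e s^T] with s = F x, Eq. (13)
    W_inc = W - W_prev                   # regularize the increment, not the full weights
    return W - 2 * eta * grad - 2 * eta * lam ** (-2) * W_inc
```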
3.2 BACKPROP
The basic idea of CAB is to guide the gradients of the loss function on every linear component of the network by a matrix conceptor computed from previous tasks during error back-propagation (Rumelhart et al., 1986), repeatedly applying the conceptor-aided SGD technique introduced in the previous section in every layer.
Consider a feed-forward network with L + 1 layers, indexed by l = 0, …, L, such that the 0-th and the L-th layers are the input and output layers respectively. W^(l) represents the linear connections between the (l−1)-th and the l-th layer, where we refer to the former as the pre-synaptic layer with respect to W^(l), and to the latter as the post-synaptic layer. We denote by N^(l) the size of the l-th layer (excluding the bias unit) and by A^(l)_j a conceptor characterizing the memory space in the l-th layer used up by the first j tasks. Let σ(·) be the activation function of the nonlinear neurons and θ all the parameters of the network to be trained. Then the incremental training method with CAB proceeds as follows:
• Initialization (no task trained yet): ∀l = 0, …, L−1, A^(l)_0 := 0_{(N^(l)+1)×(N^(l)+1)}, and randomly initialize W^(l+1)_0 to be a matrix of size N^(l+1) × (N^(l) + 1).
• Incremental task learning: For j = 1, . . . ,m do:
1. ∀l = 0, …, L−1, F^(l)_j = ¬A^(l)_{j−1}. (This conceptor characterizes the still disposable vector space in layer l for learning task j.)
2. Update the network parameters θ^{j−1}, obtained after training the first j − 1 tasks, to θ^j by stochastic gradient descent, where the gradients are computed by CAB instead of the classical backprop. Algorithms 1 and 2 detail the forward and backward pass of CAB, respectively. Different from classical backprop, the gradients are guided by a matrix conceptor F^(l)_j, such that in each layer only the activity in the still disposable memory space will contribute to the gradient. Note that the conceptors remain the same until convergence of the network for task j.
3. After training on the j-th task, run the forward procedure again on a batch of n_B input vectors, indexed by i_B, taken from the j-th training dataset, to collect the activations h^(l)_{i_B} of each layer into an N^(l) × n_B sized matrix H^(l)_j, and set the correlation matrix R^(l)_j = (1/n_B) H^(l)_j (H^(l)_j)^⊤.
4. Compute a conceptor on the l-th layer for the j-th pattern by C^(l)_j = R^(l)_j (R^(l)_j + α^{-2} I_{N^(l)×N^(l)})^{-1}, ∀l = 0, …, L − 1. Finding an optimal aperture can be done by a cross-validation search¹.
5. Update the conceptor for the already used space in every layer: A^(l)_j = A^(l)_{j−1} ∨ C^(l)_j, ∀l = 0, …, L−1 (a numpy sketch of steps 3–5 is given below).
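The post-training steps 3–5 can be sketched in a few lines of numpy; the sketch below assumes that a forward pass has already collected the per-layer activation matrices H^(l)_j, omits the bias dimension for brevity, and uses helper names of our own choosing.

```python
import numpy as np

def conceptor_or(C1, C2):
    """OR of two conceptors; pinv guards against I - C being near-singular."""
    N = C1.shape[0]
    M = C1 @ np.linalg.pinv(np.eye(N) - C1) + C2 @ np.linalg.pinv(np.eye(N) - C2)
    return M @ np.linalg.inv(M + np.eye(N))

def update_layer_conceptors(A_list, activations, aperture):
    """Steps 3-5: refresh the 'used space' conceptor of every layer after training task j.
    activations[l] is the N^(l) x n_B matrix H^(l)_j collected by a forward pass."""
    A_new = []
    for A_prev, H in zip(A_list, activations):
        n_B = H.shape[1]
        R = H @ H.T / n_B                                                 # step 3
        C = R @ np.linalg.inv(R + aperture ** (-2) * np.eye(R.shape[0]))  # step 4
        A_new.append(conceptor_or(A_prev, C))                             # step 5
    return A_new
```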
Algorithm 1 The forward procedure of conceptor-aided backprop, adapted from the traditional backprop. Input vectors are passed through a feed-forward network to compute the cost function. L(ŷ^j, y^j) denotes the loss for the j-th task, to which a regularizer Ω(θ^j_inc) = Ω(θ^j − θ^{j−1}) = ||θ^j − θ^{j−1}||^2_fro is added to obtain the total cost J, where θ contains all the weights (biases are considered as weights connected to the bias units). The increments of the parameters rather than the parameters themselves are regularized, similar to the conceptor-aided SGD.
Require: Network depth L
Require: W^(l)_j, l ∈ {1, …, L}, the weight matrices of the network
Require: x^j, one input vector of the j-th task
Require: y^j, the target output for x^j
1: h^(0) = x^j
2: for l = 1, …, L do
3:   b^(l) = [h^(l−1)⊤, 1]^⊤, include the bias unit
4:   a^(l) = W^(l)_j b^(l)
5:   h^(l) = σ(a^(l))
6: end for
7: ŷ^j = h^(L)
8: J = L(ŷ^j, y^j) + λ Ω(θ^j_inc)
Algorithm 2 The backward procedure of conceptor-aided backprop for the j-th task, adapted from the traditional backprop. The gradient g of the loss function L on the activations a^(l) represents the error for the linear transformation W^(l)_j between the (l−1)-th and the l-th layers. In the standard backprop algorithm, the gradient of L on W^(l)_j is computed as an outer product of the post-synaptic errors g and the pre-synaptic activities h^(l−1). This resembles the computation of the gradient in the linear SGD algorithm, which motivates us to apply conceptors in a similar fashion as in the conceptor-aided SGD. Specifically, we project the gradient ∇_{W^(l)_j} L by the matrix conceptor F^(l−1)_j that indicates the free memory space on the pre-synaptic layer.
1: g ← ∇_ŷ J = ∇_ŷ L(ŷ, y)
2: for l = L, L−1, …, 1 do
3:   Convert the gradient on the layer’s output into a gradient on the pre-nonlinearity activation (⊙ denotes element-wise multiplication):
       g ← ∇_{a^(l)} J = g ⊙ σ′(a^(l))
4:   Compute the gradient of the weights, project it by F^(l−1)_j, and add it to the regularization term on the increment:
       ∇_{W^(l)_j} J = g (F^(l−1)_j b^(l−1))^⊤ + λ ∇_{W^(l)_j} Ω(θ^j_inc) = g b^(l−1)⊤ F^(l−1)_j + 2λ W^(l)_{inc,j}
                     = g b^(l−1)⊤ F^(l−1)_j + 2λ (W^(l)_j − W^(l)_{j−1})
5:   Propagate the gradients w.r.t. the next lower-level hidden layer’s activations:
       g ← ∇_{h^(l−1)} J = W^(l)⊤_j g
6: end for
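Relative to standard backprop, Algorithm 2 changes only how the weight gradient is formed. The following numpy sketch, for a single layer of a sigmoid network, is our own illustration of steps 3–5; it assumes the forward pass has cached the bias-augmented pre-synaptic activity and the pre-nonlinearity activation, and all names are ours.

```python
import numpy as np

def cab_layer_gradients(g, W, W_prev, F_pre, b_pre, a, sigma_prime, lam):
    """Gradients for one layer in Algorithm 2.
    g           : gradient of J w.r.t. this layer's output h (length N_post)
    W, W_prev   : current and previous-task weight matrices (N_post x (N_pre + 1))
    F_pre       : conceptor of the free space in the pre-synaptic layer ((N_pre+1) x (N_pre+1))
    b_pre       : bias-augmented pre-synaptic activity (length N_pre + 1)
    a           : pre-nonlinearity activation of this layer (length N_post)
    sigma_prime : derivative of the activation function
    """
    g = g * sigma_prime(a)                                       # step 3: gradient on a^(l)
    dW = np.outer(g, F_pre @ b_pre) + 2 * lam * (W - W_prev)     # step 4: projected gradient + increment reg.
    g_prev = W[:, :-1].T @ g                                     # step 5: propagate (drop the bias column)
    return dW, g_prev
```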
4 EXPERIMENTS
4.1 PERMUTED MNIST EXPERIMENT
To test the performance of CAB, we evaluated it on the permuted MNIST experiment (Srivastava et al., 2013; Goodfellow et al., 2014; Kirkpatrick et al., 2017; Lee et al., 2017), where a sequence of pattern recognition tasks is created from the MNIST dataset (LeCun et al., 1998). For each task, a random permutation of input image pixels is generated and applied to all images in MNIST to obtain a new shuffled dataset that is equally difficult to recognize as the original one; the objective of each task is to recognize these images with shuffled pixels.

For a proof-of-concept demonstration, we trained a simple but sufficient feed-forward network with [784-100-10] neurons to classify 10 permuted MNIST datasets. The network has logistic sigmoid neurons in both hidden and output layers, and is trained with mean squared error as the cost function. Vanilla SGD was used in all experiments to optimize the cost function. Learning rate and aperture were set to 0.1 and 4, respectively. For comparison, we also tested EWC on the same task with the same network architecture, based on the implementation by Seff (2017). The parameters chosen for the EWC algorithm were 0.01 for the learning rate and 15 for the weight of the Fisher penalty term. Figure 3 shows the performance of CAB on this task; the average testing accuracy is 95.2% after learning all 10 tasks sequentially. Although a fair amount of effort was spent on searching for optimal parameters for EWC, the accuracies shown here might still not reflect its best performance. However, the same experiment with EWC was also conducted in Kemker et al. (2017), where the authors reimplemented EWC on a network with higher capacity (2 hidden layers and 400 ReLU neurons per layer) and the resulting average accuracy after learning 10 tasks sequentially was shown to be around 93%.
Since all tasks are generated by permuting the same dataset, the portion of the input space occupied by each of them should have the same size. However, as more tasks are learned, the chance that the space of a new task will overlap with the already used input space increases. Figure 4 shows the singular value spectra and quota of the input and hidden layer conceptors every time after a new task is learned. As the incremental learning proceeds, it becomes less likely for a new task to be in the free space. For example, the second task increases the quota of the input layer memory space by 0.1, whereas the 10th task increases it by only 0.03. However, CAB still manages to make the network learn new tasks based on their input components in the non-overlapping space.
¹Jaeger (2014) proposes a number of methods for analytical aperture optimization. It remains for future work to determine how these methods transfer to our situation.
4.2 DISJOINT MNIST EXPERIMENT
We then applied CAB to categorize the disjoint MNIST datasets into 10 classes (Srivastava et al., 2013; Lee et al., 2017). In this experiment, the original MNIST dataset is divided into two disjoint datasets, with the first one consisting of data for the first five digits (0 to 4), and the second one of the remaining five digits (5 to 9). This task requires a network to learn these two datasets one after the other, and then examines its performance in classifying the entire set of MNIST test images into 10 classes. The current state-of-the-art accuracy on this task, averaged over 10 learning trials, is 94.12(±0.27)%, achieved by Lee et al. (2017) using IMM. They also tested EWC on the same task and the average accuracy was 52.72(±1.36)%. To test our method, we trained a feed-forward network with [784-800-10] neurons. Logistic sigmoid nonlinearities were used in both hidden and output layers, and the network was trained with vanilla SGD to minimize mean squared errors. The aperture α = 9 was used for all conceptors on all layers; the learning rate η and regularization coefficient λ were chosen to be 0.1 and 0.005, respectively. The accuracy of CAB on this task, measured by repeating the experiment 10 times, is 94.91(±0.30)%. It is worth mentioning that the network used by Lee et al. (2017) for testing IMM and EWC had [784-800-800-10] rectified linear units (ReLU), so CAB achieved better performance with fewer layers and neurons.
4.3 COMPUTATIONAL COST
If a conceptor is computed by ridge regression, the time complexity is O(nN² + N³) when the design matrix is dense, where n is the number of samples and N the number of features. In terms of wall-time measures, the time taken to compute a conceptor from the entire MNIST training set (in this case, n = 55000 images and N = 784 pixels, corresponding to the input layer in our networks) is 0.42 seconds of standard notebook CPU time on average. Although we did not implement it in these experiments, incremental online adaptation of conceptors by gradient descent is also possible in principle and would come at a cost of O(N²) per update.
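The wall-time figure quoted above can be checked with a few lines of numpy; the snippet below is a rough, illustrative timing sketch of ours that uses a random matrix of the same shape as the MNIST input collection rather than the actual images.

```python
import time
import numpy as np

n, N = 55000, 784
X = np.random.rand(N, n).astype(np.float32)   # stand-in for the MNIST training inputs
alpha = 4.0

start = time.time()
R = X @ X.T / n                                         # O(n N^2)
C = R @ np.linalg.inv(R + alpha ** (-2) * np.eye(N))    # O(N^3)
print(f"conceptor computed in {time.time() - start:.2f} s")
```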
5 CONCLUSION
In this work, we first reviewed the conceptor-based incremental ridge regression algorithm, introduced in section 3.11 of Jaeger (2014) for memory management in recurrent neural networks. Then we derived its stochastic gradient descent version for optimizing the same objective. Finally we designed a conceptor-aided backprop algorithm by applying a conceptor to every linear layer of a feed-forward network. This method uses conceptors to guide gradients of parameters during the backpropagation procedure. As a result, learning a new task interferes only minimally with previously learned tasks, and the amount of already used network capacity can be monitored via the singular value spectra and quota of conceptors.
In Jaeger (2014), different scenarios for continual learning are investigated in a reservoir computing setting. Two extreme cases are obtained when (i) the involved learning tasks are entirely unrelated to each other, versus (ii) all tasks come from the same parametric family of learning tasks. The two cases differ conspicuously with regards to the geometry of involved conceptors, and with regards to opportunities to re-use previously acquired functionality in subsequent learning episodes. The permuted MNIST task is an example of (i) while the disjoint MNIST task rather is of type (ii). Conceptors provide an analytical tool to discuss the “family relatedness” and enabling/disabling conditions for continual learning in geometrical terms. Ongoing and future research is devoted to a comprehensive mathematical analysis of these phenomena which in our view lie at the heart of understanding continual learning.
ACKNOWLEDGMENTS
The work reported in this article was partly funded through the European H2020 collaborative project NeuRAM3 (grant Nr 687299). | 1. What is the focus of the paper regarding continual learning in neural networks?
2. What are the strengths of the proposed approach, particularly in terms of conceptors and their application to Stochastic Gradient Descent and Backpropagation?
3. What are the weaknesses of the paper, especially regarding computational costs and the need for more grounded vocabulary?
4. Do you have any concerns regarding the limited scope of the experiments on MNIST?
5. How does the reviewer assess the novelty and potential impact of the paper's contribution to the field of continual learning? | Review | Review
[Reviewed on January 12th]
This article applies the notion of “conceptors” -- a form of regulariser introduced by the same author a few years ago, exhibiting appealing boolean logic pseudo-operations -- to prevent forgetting in continual learning, more precisely in the training of neural networks on sequential tasks. It proposes itself as an improvement over the main recent development of the field, namely Elastic Weight Consolidation. After a brief and clear introduction to conceptors and their application to ridge regression, the authors explain how to inject conceptors into Stochastic Gradient Descent and finally, the real innovation of the paper, into Backpropagation. This is followed by a section of experiments on variants of MNIST commonly used for continual learning.
Continual learning in neural networks is a hot topic, and this article contributes a very interesting idea. The notion of conceptors is appealing in this particular use for its interpretation in terms of regularizer and in terms of Boolean logic. The numeric examples, although quite toy, provide a clear illustration.
A few things are still missing to back the strong claims of this paper:
* Some considerations of the computational costs: the reliance on the full NxN correlation matrix R makes me fear it might be costly, as it is applied to every layer of the neural network and hence N is as large as the number of units in a layer. This is of course much lighter than if it were the covariance matrix of all the weights, which would be daunting, but it still deserves to be addressed, if only with wall time measures.
* It could also be welcome to use a more grounded vocabulary, e.g. on p.2 “Figure 1 shows examples of conceptors computer from three clouds of sample state points coming from a hypothetical 3-neuron recurrent network that was drive with input signals from three difference sources” could be much more simply said as “Figure 1 shows the ellipses corresponding to three sets of R^3 points”. Being less grandiose would make the value of this article nicely on its own.
* Some examples beyond the contrived MNIST toy examples would be welcome. For example, the main method this article is compared to (EWC) had a very strong section on Reinforcement learning examples in the Atari framework, not only as an illustration but also as a motivation. I realise not everyone has the computational or engineering resources to try extensively on multiple benchmarks from classification to reinforcement learning. Nevertheless, without going to that extreme, it might be worth adding an extra demo on something bigger than MNIST. The authors transparently explain in their answer that they do not (yet!) belong to the deep learning community and hope finding some collaborations to pursue this further. If I may make a suggestion, I think their work would get much stronger impact by doing it the reverse way: first finding the collaboration, then adding this extra empirical results, which then leads to a bigger impact publication.
The latter point would normally make me attribute a score of "6: Marginally above acceptance threshold" by current DL community standards, but because there is such a pressing need for methods to tackle this problem, and because this article can generate thinking along new lines about it, I give it a 7: Good paper, accept.
ICLR | Title
Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation
Abstract
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
N/A
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed.
1 INTRODUCTION
Agents with general artificial intelligence are supposed to learn and perform well on multiple tasks. Continual learning refers to the scenarios where a machine learning system can retain previously acquired skills while learning new ones. However, when trained on a sequence of tasks, neural networks usually forget about previous tasks after their weights are adjusted for a new task. This notorious problem known as catastrophic interference (CI) (McCloskey & Cohen, 1989; Ratcliff, 1990; French, 1999; Kumaran et al., 2016) poses a serious challenge towards continual learning.
Many approaches have been proposed to overcome or mitigate the problem of CI in the last three decades (Hinton & Plaut, 1987; French, 1991; Ans & Rousset, 1997; French, 1997; Srivastava et al., 2014). Especially recently, an avalanche of new methods in the deep learning field has brought about dramatic improvements in continual learning in neural networks. Kirkpatrick et al. (2017) introduced a regularization-based method called elastic weight consolidation (EWC), which uses the posterior distribution of parameters for the old tasks as a prior for the new task. They approximated the posterior by a Gaussian distribution with the parameters for old tasks as the mean and the inverse diagonal of the Fisher information matrix as the variance. Lee et al. (2017) introduced two incremental moment matching (IMM) methods called mean-IMM and mode-IMM. Mean-IMM approximates the distribution of parameters for both old and new tasks by a Gaussian distribution, which is estimated by minimizing its KL-divergence from the mixture of two Gaussian posteriors, one for the old task and the other one for the new task. Mode-IMM estimates the mode of this mixture of two Gaussians and uses it as the optimal parameters for both tasks.
In the field of Reservoir Computing (Jaeger, 2001; Maass et al., 2002), an effective solution to CI using conceptors was proposed by Jaeger (2014) to incrementally train a recurrent neural network to generate spatial-temporal signals. Conceptors are a general-purpose neuro-computational mechanism that can be used in a diversity of neural information processing tasks including temporal pattern classification, one-shot learning, human motion pattern generation, de-noising and signal separation (Jaeger, 2017). In this paper, we adopt and extend the method introduced in Jaeger (2014) and propose a conceptor-aided backpropagation (CAB) algorithm to train feed-forward networks. For each layer of a network, CAB computes a conceptor to characterize the linear subspace spanned by the neural activations in that layer that have appeared in already learned tasks. When the network is trained on a new task, CAB uses the conceptor to adjust the gradients given by backpropagation so that the linear transformation restricted to the characterized subspace will be preserved after the
gradient descent procedure. Experiment results of two benchmark tests showed highly competitive performance of CAB.
The rest of this paper is structured as follows. Section 2 introduces conceptors and their application to incremental learning by ridge regression. Section 3 extends the method to stochastic gradient descent and describes the CAB algorithm. Section 4 compares its performance on the permuted and disjoint MNIST tasks to recent methods that address the same problem. Finally we conclude our paper in Section 5.
2 INCREMENTAL RIDGE REGRESSION BY CONCEPTORS
This section reviews the basics of conceptor theory and its application to incrementally training linear readouts of recurrent neural networks as used in reservoir computing. A comprehensive treatment can be found in (Jaeger, 2014).
2.1 CONCEPTORS
In brief, a matrix conceptor C for some vector-valued random variable x ∈ R^N is defined as a linear transformation that minimizes the following loss function

E_x[||x − Cx||^2] + α^{-2} ||C||^2_fro   (1)

where α is a control parameter called aperture and || · ||_fro is the Frobenius norm. This optimization problem has a closed-form solution

C = R(R + α^{-2} I)^{-1}   (2)

where R = E_x[x x^⊤] is the N × N correlation matrix of x, and I is the N × N identity matrix. The result given in (2) can be understood by studying the singular value decomposition (SVD) of C. If R = UΣU^⊤ is the SVD of R, then the SVD of C is given as USU^⊤, where the singular values s_i of C can be written in terms of the singular values σ_i of R: s_i = σ_i/(σ_i + α^{-2}) ∈ [0, 1). In intuitive terms, C is a soft projection matrix onto the linear subspace where the samples of x lie. For a vector y in this subspace, C acts like the identity: Cy ≈ y, and when some noise ε orthogonal to the subspace is added to y, C de-noises: C(y + ε) ≈ y. Figure 1 shows the ellipsoids corresponding to three sets of R^3 points. We define the quota Q(C) of a conceptor to be the mean of its singular values: Q(C) := (1/N) Σ_{i=1}^N s_i. Intuitively, the quota measures the fraction of the total dimensions of the entire vector space that is claimed by C.
Moreover, logic operations that satisfy most laws of Boolean logic can be defined on matrix conceptors as follows:

¬C := I − C,   (3)
C_i ∨ C_j := (R_i + R_j)(R_i + R_j + α^{-2} I)^{-1},   (4)
C_i ∧ C_j := ¬(¬C_i ∨ ¬C_j),   (5)

where ¬C softly projects onto a linear subspace that can be roughly understood as the orthogonal complement of the subspace characterized by C. C_i ∨ C_j is the conceptor computed from the union of the two sets of sample points from which C_i and C_j are computed. It describes a space that is approximately the sum of the linear subspaces characterized by C_i and C_j, respectively. The definition of C_i ∧ C_j reflects de Morgan’s law. Figure 2 illustrates the geometry of these operations.
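To make definitions (1)–(5) concrete, the following numpy sketch computes conceptors from two toy point clouds and applies the Boolean operations and the quota; it is our own illustration, and the OR operation is implemented through the identity R = α^{-2} C (I − C)^{-1}, under which the aperture cancels.

```python
import numpy as np

def conceptor(X, alpha):
    """Eq. (2): C = R (R + alpha^-2 I)^-1 for samples stored column-wise in X."""
    N, n = X.shape
    R = X @ X.T / n
    return R @ np.linalg.inv(R + alpha ** (-2) * np.eye(N))

def NOT(C):                    # Eq. (3)
    return np.eye(C.shape[0]) - C

def OR(Ci, Cj):                # Eq. (4), via R = alpha^-2 C (I - C)^-1 (the aperture cancels)
    N = Ci.shape[0]
    M = Ci @ np.linalg.pinv(np.eye(N) - Ci) + Cj @ np.linalg.pinv(np.eye(N) - Cj)
    return M @ np.linalg.inv(M + np.eye(N))

def AND(Ci, Cj):               # Eq. (5), de Morgan's law
    return NOT(OR(NOT(Ci), NOT(Cj)))

def quota(C):
    """Mean singular value of C: the fraction of the space claimed by C."""
    return np.linalg.svd(C, compute_uv=False).mean()

# toy example: two random 3-D point clouds
X1, X2 = np.random.randn(3, 200), 0.1 * np.random.randn(3, 200)
C1, C2 = conceptor(X1, alpha=2.0), conceptor(X2, alpha=2.0)
print(quota(C1), quota(OR(C1, C2)))
```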
2.2 INCREMENTAL RIDGE REGRESSION
This subsection explains how conceptors can be applied to master continual learning in a simple linear model trained on a supervised task by ridge regression. The training is done sequentially on multiple input-to-output mapping tasks. This simplified scenario illustrates the working principle of continual learning with conceptors and will later be used repeatedly as a sub-procedure in the CAB algorithm for training multilayer feed-forward networks.
Consider a sequence of m incoming tasks indexed by j. We denote the training dataset for the j-th task by {(x^j_1, y^j_1), …, (x^j_n, y^j_n)}, where x^j_i ∈ R^N are input vectors and y^j_i ∈ R^M their corresponding target outputs. Whenever the training dataset for a new task is available, the incremental learning method will compute a matrix conceptor C^j for the input variable of the new task using Equation 2 and update the linear model, resulting in a sequence of linear models W^1, …, W^m such that W^j solves not only the j-th task but also all previous tasks: for k ≤ j, y^k ≈ W^j x^k. The conceptor C^j is a soft projection matrix onto the linear subspace spanned by input patterns from the j-th task. Then, A^{j-1} = C^1 ∨ … ∨ C^{j-1} characterizes the memory space already claimed by the tasks 1, …, j − 1 and F^j = ¬A^{j-1}, the orthogonal complement of A^{j-1}, represents the memory space still free for the j-th task. Here “memory space” refers to the linear space of input vectors. In detail, this method proceeds in the following way:
• Initialization (no task trained yet): W^0 = 0_{M×N}, A^0 = 0_{N×N}.
• Incremental task learning: For tasks j = 1, …, m do:
  1. Store the input vectors from the j-th training dataset of size n into an N × n input collection matrix X^j, and store the output vectors into an M × n output collection matrix Y^j.
  2. Compute the conceptor for this task by C^j = R^j (R^j + α^{-2} I)^{-1}, where R^j = (1/n) X^j X^{j⊤}.
  3. Train an increment matrix W^j_inc (to be added to W^{j-1}, yielding W^j), with the crucial aid of a helper conceptor F^j:
     (a) F^j := ¬A^{j-1} (comment: this conceptor characterizes the “still disposable” memory space for the j-th task),
     (b) T := Y^j − W^{j-1} X^j (comment: this matrix consists of target values for a linear regression to compute W^j_inc),
     (c) S := F^j X^j (comment: this matrix consists of input arguments for the linear regression),
     (d) W^j_inc = ((S S^⊤/n + λ^{-2} I)^{-1} S T^⊤/n)^⊤ (comment: carry out the regression, regularized by λ^{-2}).
  4. Update W^j: W^j = W^{j-1} + W^j_inc.
  5. Update A: A^j = A^{j-1} ∨ C^j (comment: this is possible due to the associativity of the ∨ operation on conceptors).
The weight increment W^j_inc does not interfere much with the previously learned weights W^{j-1} because the regularization in step 3(d) constrains the row space of W^j_inc to be only the linear subspace spanned by the input arguments defined in 3(c), which are inside the kernel of W^{j-1} due to the projection by F^j. Intuitively speaking, when learning a new task, this algorithm exploits only the components of input vectors in the still unused space (kernel of W^{j-1}, characterized by F^j) to compensate errors for the new task and leaves the directions in the already used memory space (row space of W^{j-1}, characterized by A^{j-1}) intact.
3 CONCEPTOR-AIDED SGD AND BACK-PROP
In this section, we first derive a stochastic gradient descent version of the algorithm described in the previous section, then present the procedure of CAB.
3.1 SGD
In the algorithm introduced in the previous section, W^j_inc is computed by ridge regression, which offers a closed-form solution to minimize the following cost function

J(W^j_inc) := E[|W^j_inc s − t|^2] + λ^{-2} |W^j_inc|^2_fro   (6)

where t = y^j − W^{j-1} x^j and s = F^j x^j. One can also minimize this cost function by stochastic gradient descent (SGD), which starts from an initial guess of W^j_inc and repeatedly performs the following update

W^j_inc ← W^j_inc − η ∇_{W^j_inc} J(W^j_inc)   (7)

where η is the learning rate and the gradient is given by:

∇_{W^j_inc} J(W^j_inc) = 2 E[(W^j_inc s − t) s^⊤] + 2 λ^{-2} W^j_inc   (8)
Substituting t by y^j − W^{j-1} x^j and s by F^j x^j = (I − A^{j-1}) x^j in (8), we get

∇_{W^j_inc} J(W^j_inc) = 2 E[(W^j_inc (I − A^{j-1}) x^j − y^j + W^{j-1} x^j) s^⊤] + 2 λ^{-2} W^j_inc   (9)
                       = 2 E[(−W^j_inc A^{j-1} x^j + (W^{j-1} + W^j_inc) x^j − y^j) s^⊤] + 2 λ^{-2} W^j_inc   (10)

Due to the regularization term in the cost function, as the optimization goes on, W^j_inc will eventually null the input components that are not inside the linear subspace characterized by F^j; hence W^j_inc A^{j-1} x^j will converge to 0 as the algorithm proceeds. In addition, since W^j = W^{j-1} + W^j_inc, (10) can be simplified to

∇_{W^j_inc} J(W^j_inc) = 2 E[(W^j x^j − y^j) s^⊤] + 2 λ^{-2} W^j_inc   (11)
Adding W^{j-1} to both sides of (7), we obtain the update rule for W^j:

W^j ← W^j − 2η E[e s^⊤] − 2η λ^{-2} W^j_inc   (12)

where e := W^j x^j − y^j. In practice, at every iteration, the expected value can be approximated by a mini-batch of size n_B, indexed by i_B:

Ê[e s^⊤] = (1/n_B) Σ_{i_B=1}^{n_B} (W^j x^j_{i_B} − y^j_{i_B})(F^j x^j_{i_B})^⊤ = (1/n_B) Σ_{i_B=1}^{n_B} (W^j x^j_{i_B} − y^j_{i_B}) x^{j⊤}_{i_B} F^j   (13)

where the transpose on F^j can be dropped since it is symmetric.
If we only train the j-th task without considering the previous tasks, the update rule given by normal SGD is

W^j ← W^j − 2η E[e x^{j⊤}] − 2η λ^{-2} W^j   (14)

Comparing this to the update rule in (12), we notice two modifications when a conceptor is adopted to avoid CI: first, the gradients of the weights are calculated using the conceptor-projected input vector s = F^j x^j instead of the original input vector x^j; second, regularization is done on the weight increment W^j_inc rather than the final weight W^j. These two modifications lead to our design of the conceptor-aided algorithm for training multilayer feed-forward networks.
3.2 BACKPROP
The basic idea of CAB is to guide the gradients of the loss function on every linear component of the network by a matrix conceptor computed from previous tasks during error back-propagation (Rumelhart et al., 1986), repeatedly applying the conceptor-aided SGD technique introduced in the previous section in every layer.
Consider a feed-forward network with L + 1 layers, indexed by l = 0, …, L, such that the 0-th and the L-th layers are the input and output layers respectively. W^(l) represents the linear connections between the (l−1)-th and the l-th layer, where we refer to the former as the pre-synaptic layer with respect to W^(l), and to the latter as the post-synaptic layer. We denote by N^(l) the size of the l-th layer (excluding the bias unit) and by A^(l)_j a conceptor characterizing the memory space in the l-th layer used up by the first j tasks. Let σ(·) be the activation function of the nonlinear neurons and θ all the parameters of the network to be trained. Then the incremental training method with CAB proceeds as follows:
• Initialization (no task trained yet): ∀l = 0, …, L−1, A^(l)_0 := 0_{(N^(l)+1)×(N^(l)+1)}, and randomly initialize W^(l+1)_0 to be a matrix of size N^(l+1) × (N^(l) + 1).
• Incremental task learning: For j = 1, . . . ,m do:
1. ∀l = 0, …, L−1, F^(l)_j = ¬A^(l)_{j−1}. (This conceptor characterizes the still disposable vector space in layer l for learning task j.)
2. Update the network parameters θ^{j−1}, obtained after training the first j − 1 tasks, to θ^j by stochastic gradient descent, where the gradients are computed by CAB instead of the classical backprop. Algorithms 1 and 2 detail the forward and backward pass of CAB, respectively. Different from classical backprop, the gradients are guided by a matrix conceptor F^(l)_j, such that in each layer only the activity in the still disposable memory space will contribute to the gradient. Note that the conceptors remain the same until convergence of the network for task j.
3. After training on the j-th task, run the forward procedure again on a batch of n_B input vectors, indexed by i_B, taken from the j-th training dataset, to collect the activations h^(l)_{i_B} of each layer into an N^(l) × n_B sized matrix H^(l)_j, and set the correlation matrix R^(l)_j = (1/n_B) H^(l)_j (H^(l)_j)^⊤.
4. Compute a conceptor on the l-th layer for the j-th pattern by C^(l)_j = R^(l)_j (R^(l)_j + α^{-2} I_{N^(l)×N^(l)})^{-1}, ∀l = 0, …, L − 1. Finding an optimal aperture can be done by a cross-validation search¹.
5. Update the conceptor for the already used space in every layer: A^(l)_j = A^(l)_{j−1} ∨ C^(l)_j, ∀l = 0, …, L−1.
Algorithm 1 The forward procedure of conceptor-aided backprop, adapted from the traditional backprop. Input vectors are passed through a feed-forward network to compute the cost function. L(ŷ^j, y^j) denotes the loss for the j-th task, to which a regularizer Ω(θ^j_inc) = Ω(θ^j − θ^{j−1}) = ||θ^j − θ^{j−1}||^2_fro is added to obtain the total cost J, where θ contains all the weights (biases are considered as weights connected to the bias units). The increments of the parameters rather than the parameters themselves are regularized, similar to the conceptor-aided SGD.
Require: Network depth L
Require: W^(l)_j, l ∈ {1, …, L}, the weight matrices of the network
Require: x^j, one input vector of the j-th task
Require: y^j, the target output for x^j
1: h^(0) = x^j
2: for l = 1, …, L do
3:   b^(l) = [h^(l−1)⊤, 1]^⊤, include the bias unit
4:   a^(l) = W^(l)_j b^(l)
5:   h^(l) = σ(a^(l))
6: end for
7: ŷ^j = h^(L)
8: J = L(ŷ^j, y^j) + λ Ω(θ^j_inc)
Algorithm 2 The backward procedure of conceptor-aided backprop for the j-th task, adapted from the traditional backprop. The gradient g of the loss function L on the activations a^(l) represents the error for the linear transformation W^(l)_j between the (l−1)-th and the l-th layers. In the standard backprop algorithm, the gradient of L on W^(l)_j is computed as an outer product of the post-synaptic errors g and the pre-synaptic activities h^(l−1). This resembles the computation of the gradient in the linear SGD algorithm, which motivates us to apply conceptors in a similar fashion as in the conceptor-aided SGD. Specifically, we project the gradient ∇_{W^(l)_j} L by the matrix conceptor F^(l−1)_j that indicates the free memory space on the pre-synaptic layer.
1: g ← ∇_ŷ J = ∇_ŷ L(ŷ, y)
2: for l = L, L−1, …, 1 do
3:   Convert the gradient on the layer’s output into a gradient on the pre-nonlinearity activation (⊙ denotes element-wise multiplication):
       g ← ∇_{a^(l)} J = g ⊙ σ′(a^(l))
4:   Compute the gradient of the weights, project it by F^(l−1)_j, and add it to the regularization term on the increment:
       ∇_{W^(l)_j} J = g (F^(l−1)_j b^(l−1))^⊤ + λ ∇_{W^(l)_j} Ω(θ^j_inc) = g b^(l−1)⊤ F^(l−1)_j + 2λ W^(l)_{inc,j}
                     = g b^(l−1)⊤ F^(l−1)_j + 2λ (W^(l)_j − W^(l)_{j−1})
5:   Propagate the gradients w.r.t. the next lower-level hidden layer’s activations:
       g ← ∇_{h^(l−1)} J = W^(l)⊤_j g
6: end for
4 EXPERIMENTS
4.1 PERMUTED MNIST EXPERIMENT
To test the performance of CAB, we evaluated it on the permuted MNIST experiment (Srivastava et al., 2013; Goodfellow et al., 2014; Kirkpatrick et al., 2017; Lee et al., 2017), where a sequence of pattern recognition tasks is created from the MNIST dataset (LeCun et al., 1998). For each task, a random permutation of input image pixels is generated and applied to all images in MNIST to obtain a new shuffled dataset that is equally difficult to recognize as the original one; the objective of each task is to recognize these images with shuffled pixels.

For a proof-of-concept demonstration, we trained a simple but sufficient feed-forward network with [784-100-10] neurons to classify 10 permuted MNIST datasets. The network has logistic sigmoid neurons in both hidden and output layers, and is trained with mean squared error as the cost function. Vanilla SGD was used in all experiments to optimize the cost function. Learning rate and aperture were set to 0.1 and 4, respectively. For comparison, we also tested EWC on the same task with the same network architecture, based on the implementation by Seff (2017). The parameters chosen for the EWC algorithm were 0.01 for the learning rate and 15 for the weight of the Fisher penalty term. Figure 3 shows the performance of CAB on this task; the average testing accuracy is 95.2% after learning all 10 tasks sequentially. Although a fair amount of effort was spent on searching for optimal parameters for EWC, the accuracies shown here might still not reflect its best performance. However, the same experiment with EWC was also conducted in Kemker et al. (2017), where the authors reimplemented EWC on a network with higher capacity (2 hidden layers and 400 ReLU neurons per layer) and the resulting average accuracy after learning 10 tasks sequentially was shown to be around 93%.
Since all tasks are generated by permuting the same dataset, the portion of the input space occupied by each of them should have the same size. However, as more tasks are learned, the chance that the space of a new task will overlap with the already used input space increases. Figure 4 shows the singular value spectra and quota of the input and hidden layer conceptors every time after a new task is learned. As the incremental learning proceeds, it becomes less likely for a new task to be in the free space. For example, the second task increases the quota of the input layer memory space by 0.1, whereas the 10th task increases it by only 0.03. However, CAB still manages to make the network learn new tasks based on their input components in the non-overlapping space.
¹Jaeger (2014) proposes a number of methods for analytical aperture optimization. It remains for future work to determine how these methods transfer to our situation.
4.2 DISJOINT MNIST EXPERIMENT
We then applied CAB to categorize the disjoint MNIST datasets into 10 classes (Srivastava et al., 2013; Lee et al., 2017). In this experiment, the original MNIST dataset is divided into two disjoint datasets, with the first one consisting of data for the first five digits (0 to 4), and the second one of the remaining five digits (5 to 9). This task requires a network to learn these two datasets one after the other, and then examines its performance in classifying the entire set of MNIST test images into 10 classes. The current state-of-the-art accuracy on this task, averaged over 10 learning trials, is 94.12(±0.27)%, achieved by Lee et al. (2017) using IMM. They also tested EWC on the same task and the average accuracy was 52.72(±1.36)%. To test our method, we trained a feed-forward network with [784-800-10] neurons. Logistic sigmoid nonlinearities were used in both hidden and output layers, and the network was trained with vanilla SGD to minimize mean squared errors. The aperture α = 9 was used for all conceptors on all layers; the learning rate η and regularization coefficient λ were chosen to be 0.1 and 0.005, respectively. The accuracy of CAB on this task, measured by repeating the experiment 10 times, is 94.91(±0.30)%. It is worth mentioning that the network used by Lee et al. (2017) for testing IMM and EWC had [784-800-800-10] rectified linear units (ReLU), so CAB achieved better performance with fewer layers and neurons.
4.3 COMPUTATIONAL COST
If a conceptor is computed by ridge regression, the time complexity is O(nN² + N³) when the design matrix is dense, where n is the number of samples and N the number of features. In terms of wall-time measures, the time taken to compute a conceptor from the entire MNIST training set (in this case, n = 55000 images and N = 784 pixels, corresponding to the input layer in our networks) is 0.42 seconds of standard notebook CPU time on average. Although we did not implement it in these experiments, incremental online adaptation of conceptors by gradient descent is also possible in principle and would come at a cost of O(N²) per update.
5 CONCLUSION
In this work, we first reviewed the conceptor-based incremental ridge regression algorithm, introduced in section 3.11 of Jaeger (2014) for memory management in recurrent neural networks. Then we derived its stochastic gradient descent version for optimizing the same objective. Finally we designed a conceptor-aided backprop algorithm by applying a conceptor to every linear layer of a feed-forward network. This method uses conceptors to guide gradients of parameters during the backpropagation procedure. As a result, learning a new task interferes only minimally with previously learned tasks, and the amount of already used network capacity can be monitored via the singular value spectra and quota of conceptors.
In Jaeger (2014), different scenarios for continual learning are investigated in a reservoir computing setting. Two extreme cases are obtained when (i) the involved learning tasks are entirely unrelated to each other, versus (ii) all tasks come from the same parametric family of learning tasks. The two cases differ conspicuously with regards to the geometry of involved conceptors, and with regards to opportunities to re-use previously acquired functionality in subsequent learning episodes. The permuted MNIST task is an example of (i) while the disjoint MNIST task rather is of type (ii). Conceptors provide an analytical tool to discuss the “family relatedness” and enabling/disabling conditions for continual learning in geometrical terms. Ongoing and future research is devoted to a comprehensive mathematical analysis of these phenomena which in our view lie at the heart of understanding continual learning.
ACKNOWLEDGMENTS
The work reported in this article was partly funded through the European H2020 collaborative project NeuRAM3 (grant Nr 687299). | 1. What is the focus of the paper regarding learning new tasks without interfering with previous tasks?
2. What are the strengths and weaknesses of the proposed method, particularly when compared to other approaches like EWC and IMM?
3. How does the reviewer assess the significance and relevance of the paper's content to the community?
4. Are there any concerns or limitations regarding the method's applicability and effectiveness in real-life scenarios?
5. What are the questions left unanswered by the paper, specifically concerning the broader context of continual learning and its optimal approach? | Review | Review
This paper introduces a method for learning new tasks, without interfering with previous tasks, using conceptors. The method originates from linear algebra: the network tries to algebraically infer the main subspace in which previous tasks were learned, and then learns the new task in a new subspace which is "unused" until the present task at hand.
The paper starts by describing the method and giving some context for it and for previous methods that deal with the same problem. In Section 2 the authors review conceptors. This is an algebraic method closely related to spanning subspaces and SVD. The main advantage of using conceptors is their trait of Boolean logic: i.e., their ability to be added and multiplied naturally. In Section 3 the authors elaborate on the reviewed conceptors method and show how to adapt this algorithm to SGD with back-propagation. The authors provide a version with batch SGD as well.
In Section 4, the authors demonstrate their method on permuted MNIST. They compare the method to EWC with the same architecture and show that their method suffers less degradation on permuted MNIST. They also compared the method to EWC and IMM on disjoint MNIST and again obtained the best performance.
In general, unlike what the authors suggest, I do not believe this method is how biological agents perform their tasks in real life. Nevertheless, the authors show that their method indeed reduces the interference generated by a new task on the previously learned tasks.
I think that this work might interest the community, since such methods might be part of the tools that practitioners have in order to cope with learning new tasks without destroying the previous ones. What is missing is the following: I think that without any additional effort, a network can learn a new task in parallel to other tasks, or some other techniques may be used which are not bound to any algebraic methods. Therefore, my only concern is that this comparison is bound to a very specific group of methods, and the question of what is the best method for continual learning remains open.
ICLR | Title
Improved generalization by noise enhancement
Abstract
Recent studies have demonstrated that noise in stochastic gradient descent (SGD) is closely related to generalization: A larger SGD noise, if not too large, results in better generalization. Since the covariance of the SGD noise is proportional to η/B, where η is the learning rate and B is the minibatch size of SGD, the SGD noise has so far been controlled by changing η and/or B. However, too large η results in instability in the training dynamics and a small B prevents scalable parallel computation. It is thus desirable to develop a method of controlling the SGD noise without changing η and B. In this paper, we propose a method that achieves this goal using “noise enhancement”, which is easily implemented in practice. We expound the underlying theoretical idea and demonstrate that the noise enhancement actually improves generalization for real datasets. It turns out that large-batch training with the noise enhancement even shows better generalization compared with small-batch training.
1 INTRODUCTION
It is a big theoretical challenge in deep learning studies to understand why networks trained via stochastic gradient descent (SGD) and its variants generalize so well in the overparameterized regime, in which the number of network parameters greatly exceeds that of the training data samples (Zhang et al., 2017). This fundamental problem has been tackled from different points of view (Dziugaite & Roy, 2017; Nagarajan & Kolter, 2017; Neyshabur et al., 2017; 2019; Arora et al., 2018; Pérez et al., 2019; Jacot et al., 2018; Arora et al., 2019; D’Ascoli et al., 2020). Among them, some recent studies have pointed out the importance of an implicit regularization effect of SGD (Zhu et al., 2019; Wu et al., 2019; Smith et al., 2020). Indeed, it is empirically known that the SGD noise strength is strongly correlated with generalization of the trained network (Li et al., 2017; Jastrzȩbski et al., 2017; Goyal et al., 2017; Smith & Le, 2018; Hoffer et al., 2017; 2019). It has also been argued that the SGD noise prefers wide flat minima, which are considered to indicate good generalization (Keskar et al., 2017; Hoffer et al., 2017; Wu et al., 2018). From this viewpoint, not only its strength, but also the structure of the SGD noise is considered to be important since it is theoretically shown that the network can efficiently escape from bad local minima with the help of the SGD noise but not of an isotropic Gaussian noise with the same strength (Zhu et al., 2019; Wu et al., 2019).
The covariance of the SGD noise is proportional to η²/B, where η and B denote the learning rate and the minibatch size, respectively, and hence the SGD noise strength can be controlled by changing η and/or B. To realize good generalization, we want to increase the SGD noise strength by increasing η and/or decreasing B. However, when η becomes too large, the training dynamics often becomes unstable and the training fails. On the other hand, decreasing B prevents an efficient parallelization using multiple GPUs or TPUs.¹ It is therefore desirable to control the SGD noise without changing these hyperparameters.
The main contribution of the present paper is to show that the SGD noise can be controlled without changing η and B by a simple yet efficient method that we call noise enhancement. In this method, the gradient of the loss function is evaluated by using two independent minibatches. We will explain our theoretical idea in Sec. 2. We will also demonstrate that the noise enhancement improves
¹However, it is not at all trivial whether the large-batch training is really efficient even with an ideal parallelization. See Golmant et al. (2018); Hoffer et al. (2019) for scalability of large-batch training.
generalization in Sec. 3. In particular, it is empirically shown that the large-batch training using the noise enhancement even outperforms the small-batch training. This result gives us some insights into the relation between the SGD noise and generalization, which is discussed in Sec. 4. Because of its simplicity in implementation, this method would also be useful in practice.
2 NOISE ENHANCEMENT
We shall consider a classification problem. The training dataset D = {(x^(µ), y^(µ))}_{µ=1,2,…,N} consists of pairs of the input data vector x^(µ) and its label y^(µ). The set of all the network parameters is simply denoted by w. Then the output of the network for a given input x is denoted by f(x; w). The loss function is defined as

L(w) = (1/N) Σ_{µ=1}^{N} ℓ(f(x^(µ); w), y^(µ)) ≡ (1/N) Σ_{µ=1}^{N} ℓ_µ(w),   (1)

where the function ℓ(·, ·) specifies the loss (in this paper we employ the cross-entropy loss).
In SGD, the training data is divided into minibatches of size B, and the parameter update is done by using one of them. Let B_t ⊂ {1, 2, …, N} with |B_t| = B be a random minibatch chosen at the t-th step; the network parameter w_t is then updated as

w_{t+1} = w_t − η ∇_w L_{B_t}(w_t),   L_{B_t}(w) = (1/B) Σ_{µ∈B_t} ℓ_µ(w)   (2)

in vanilla SGD, where η > 0 is the learning rate. It is also expressed as

w_{t+1} = w_t − η ∇_w L(w_t) − η [∇_w L_{B_t}(w_t) − ∇_w L(w_t)] ≡ w_t − η ∇_w L(w_t) − ξ_t(w_t).   (3)

Here, ξ_t corresponds to the SGD noise since its average over samplings of random minibatches is zero: E_{B_t}[ξ_t] = 0. Its covariance is also calculated straightforwardly (Zhu et al., 2019):
E_{B_t}[ξ_t ξ_t^⊤] = (η²/B) ((N − B)/(N − 1)) ( (1/N) Σ_{µ=1}^{N} ∇_w ℓ_µ ∇_w ℓ_µ^⊤ − ∇_w L ∇_w L^⊤ )
                  ≈ (η²/B) ( (1/N) Σ_{µ=1}^{N} ∇_w ℓ_µ ∇_w ℓ_µ^⊤ − ∇_w L ∇_w L^⊤ ),   (4)

where we assume N ≫ B in obtaining the last expression. This expression² shows that the SGD noise strength is controlled by η and B.
We want to enhance the SGD noise without changing η and B. Naively, it is possible just by replacing ξ_t by αξ_t with a new parameter α > 1. Equation (3) is then written as

w_{t+1} = w_t − η ∇_w L(w_t) − αξ_t(w_t) = w_t − η [α ∇_w L_{B_t}(w_t) + (1 − α) ∇_w L(w_t)].   (5)

Practically, Eq. (5) would be useless because the computation of ∇_w L(w_t), i.e. the gradient of the loss function over the entire training data, is required for each iteration.³ Instead, we propose replacing ∇_w L(w_t) in Eq. (5) by ∇_w L_{B′_t}(w_t), where B′_t is another minibatch of the same size B that is independent of B_t. We thus obtain the following update rule of the noise-enhanced SGD:

w_{t+1} = w_t − η [α ∇_w L_{B_t}(w_t) + (1 − α) ∇_w L_{B′_t}(w_t)].   (6)
²From Eq. (4), some authors (Krizhevsky, 2014; Hoffer et al., 2017) argue that the SGD noise strength is proportional to η/√B, while others (Li et al., 2017; Jastrzȩbski et al., 2017; Smith et al., 2018) argue that it is rather proportional to √(η/B) on the basis of the stochastic differential equation obtained for an infinitesimal η → +0. Thus the learning-rate dependence of the noise strength is rather complicated.
³If we have computational resources large enough to realize ideal parallelization for the full training dataset, this naive noise enhancement would work. However, with limited computational resources, it is not desirable that we have to evaluate ∇_w L(w_t) for each iteration.
By defining the SGD noise ξ′_t associated with B′_t as

ξ′_t(w_t) = η [∇_w L_{B′_t}(w_t) − ∇_w L(w_t)],   (7)

Eq. (6) is rewritten as

w_{t+1} = w_t − η ∇_w L(w_t) − ξ^NE_t(w_t),   (8)

where the noise ξ^NE_t in the noise-enhanced SGD is given by

ξ^NE_t = αξ_t + (1 − α)ξ′_t.   (9)

Its mean is obviously zero, i.e. E_{B_t, B′_t}[ξ^NE_t] = 0, and its covariance is given by
E_{B_t, B′_t}[ξ^NE_t (ξ^NE_t)^⊤] = α² E_{B_t}[ξ_t ξ_t^⊤] + (1 − α)² E_{B′_t}[ξ′_t (ξ′_t)^⊤]
                               = [α² + (1 − α)²] E_{B_t}[ξ_t ξ_t^⊤],   (10)

where we have used the fact that the two noises ξ_t and ξ′_t are i.i.d. random variables. In this way, the SGD-noise covariance is enhanced by a factor of α² + (1 − α)² > 1 for α > 1. Since the size of the new minibatch B′_t is the same as that of the original minibatch B_t, the noise enhancement does not incur any serious computational cost.
If we assume N ≫ B, Eq. (10) is equivalent to Eq. (4) with an effective minibatch size

B_eff = B / (α² + (1 − α)²).   (11)
If the SGD noise were Gaussian, this would mean that the noise-enhanced SGD is equivalent to normal SGD with the effective minibatch size B_eff. However, the SGD noise is actually far from Gaussian during training (Panigrahi et al., 2019), at least when the minibatch size is not too large. The noise enhancement is therefore not equivalent to reducing the minibatch size unless B_eff is still large.
The procedure of the noise enhancement is summarized as follows: (i) prepare two independent minibatches B_t and B′_t, and (ii) replace the minibatch gradient ∇_w L_{B_t}(w_t) by α ∇_w L_{B_t}(w_t) + (1 − α) ∇_w L_{B′_t}(w_t). The numerical implementation is quite simple; a sketch is given below. It should be noted that the noise enhancement is also applicable to other variants of SGD like Adam.
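For concreteness, a minimal PyTorch-style sketch of one noise-enhanced update is given below; the model, loss function, and optimizer are assumed to be standard objects, and all names are ours. The combined loss α L_{B_t} + (1 − α) L_{B′_t} has exactly the noise-enhanced gradient of Eq. (6).

```python
import torch

def noise_enhanced_step(model, loss_fn, optimizer, batch1, batch2, alpha=1.5):
    """One noise-enhanced update: grad <- alpha * grad(B_t) + (1 - alpha) * grad(B'_t).
    batch1 and batch2 are two independently drawn minibatches of the same size."""
    x1, y1 = batch1
    x2, y2 = batch2
    optimizer.zero_grad()
    loss1 = loss_fn(model(x1), y1)
    loss2 = loss_fn(model(x2), y2)
    # the gradient of this combined loss is exactly the noise-enhanced gradient of Eq. (6)
    (alpha * loss1 + (1.0 - alpha) * loss2).backward()
    optimizer.step()
```

Because only the loss combination changes, the same wrapper works unchanged with Adam or other gradient-based optimizers, as noted above.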
3 EXPERIMENT
We shall demonstrate the efficiency of the method of the noise enhancement (NE) for several network configurations with a real dataset as listed in Table 1.
We describe the details of the network architecture below:
• F1: A fully-connected feed-forward network with 7 hidden layers, each of which has 500 neurons with the ReLU activation. The output layer consists of 10 neurons with the softmax activation.
• C1: A modified version of the VGG configuration (Simonyan & Zisserman, 2014). Following Keskar et al. (2017), let us denote a stack of n convolutional layers of a filters and a kernel size of b × c with the stride length of d by n × [a, b, c, d]. The C1 network uses the configuration: 3× [64, 3, 3, 1], 3× [128, 3, 3, 1], 3× [256, 3, 3, 1], where a MaxPool(2) is applied after each stack. To all layers, the ghost-batch normalization of size 100 and the ReLU activation are applied. Finally, an output layer consists of 10 neurons with the softmax activation.
For all experiments, we used the cross-entropy loss and the Adam optimizer with the default hyperparameters. Neither data augmentation nor weight decay is applied in our experiments. To aid convergence, we halve the learning rate when the training loss reaches the value L∗. Training finishes when the training loss becomes smaller than the value L∗∗. Our choices of L∗ and L∗∗ are also described in Table 1. The convergence time is defined as the number of iteration steps until the training finishes. Training is repeated 10 times starting from different random initializations (the Glorot initialization is used), and we measure the mean test accuracy and the mean convergence time as well as their standard deviations.
3.1 EFFECT OF THE NOISE ENHANCEMENT
First we demonstrate how the noise enhancement affects the generalization and the convergence time for C1 (similar results are obtained for F1 and C2 as we show later). For each fixed value of α = 1, 1.5, 2.0, 2.5 (α = 1 means no NE applied) we calculated the mean test accuracy and the mean convergence time for varying minibatch sizes B. The result is presented in Fig. 1. We can see that the NE improves generalization for a not too large α. It is also observed that the generalization gap between small-batch training and large-batch training diminishes by increasing α. The NE with large α is therefore efficient for large-batch training. On the other hand, the convergence time increases with α for a fixed B.
For each fixed α, there is an optimal minibatch size Bopt, which increases with α. In Table 2, we list Bopt ∈ {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 5000} as well as the test
accuracy and the convergence time at B = Bopt. We see that the test accuracy at Bopt is improved by the NE. Moreover, the NE shortens the convergence time at Bopt without hurting generalization performance.4 This experimental observation shows practical efficiency of the method of the NE.
Although we have focused on C1, other configurations F1 and C2 also show similar results. For F1 and C2, we compare the result for α = 1 with that for α = 1.5. In Fig. 2, the minibatch-size dependences of the test accuracy and the convergence time are shown for F1 and C2. In Table 2, we also show the test accuracy and the convergence time at B = Bopt for each α in F1 and C2. These results are qualitatively same as those in C1 (Fig. 1 and Table 2).
3.2 COMPARISON BETWEEN THE NOISE ENHANCEMENT AND REDUCING THE MINIBATCH SIZE
It is pointed out that reducing the minibatch size B with α = 1 has a similar effect as the NE with a fixed B; it results in better generalization but a longer convergence time.5 We shall compare the large-batch training with the NE to the small-batch training without the NE. First we calculate the test accuracy and the convergence time for varying B and a fixed α = 1 (no NE). We then calculate the test accuracy for varying α > 1 and a fixed B = 5000, which corresponds to large minibatch training. In other words, we compare the effect of the NE with that of reducing B.
The comparison between reducing B with α = 1 and increasing α with B = 5000 is given in Fig. 3. We see that both give similar curves; increasing the convergence time with a peaked test accuracy.
⁴The NE for a fixed B increases the convergence time, but B_opt also increases, which decreases the convergence time.
⁵As was already mentioned, under the Gaussian noise approximation, increasing α is indeed equivalent to reducing B to B_eff given by Eq. (11).
However, in every case of F1, C1, and C2, the NE (increasing α) results in better accuracy compared with reducing B if α is properly chosen.
In Table 3, we compare the best test accuracies between varying B with α = 1 (without the NE) and increasing α with B = 5000 (with the NE). In all cases, the large-batch training with the NE outperforms the small-batch training without the NE.
4 DISCUSSION
We have shown that the method of the NE for gradient-based optimization algorithms improves generalization. In particular, large-batch training with the NE even outperforms small-batch training without the NE, which clearly shows that the NE is not equivalent to reducing the minibatch size B.
In this section, we shall discuss two fundamental questions raised here:
(i) Why does a stronger SGD noise result in a better generalization?
(ii) How is the inequivalence between the NE and reducing B theoretically understood?
We first consider (i). When the SGD noise strength is inhomogeneous in the parameter space, network parameters will be likely to evolve to a minimum of the loss landscape with a weaker SGD noise.6 That is, if the SGD noise is strong enough near a minimum, the network parameters will easily escape from it with the help of the SGD noise. As a result, only minima around which the
6In physics, similar phenomena are known; Brownian particles in a medium with inhomogeneous temperature tend to gather in a colder region (Soret effect) (Duhr & Braun, 2006; Sancho, 2015).
SGD noise is weak enough survive. Since the covariance of the SGD noise is given by Eq. (4), or Eq. (10) for the NE, the strong SGD noise is considered to have an implicit regularization effect toward minima with a small variance of {∇wℓµ}. Some previous studies have introduced various measures which express an implicit regularization effect of SGD (Keskar et al., 2017; Yin et al., 2018; Wu et al., 2018). Among them, the “gradient diversity” introduced by Yin et al. (2018) is closely related to the above argument.
A small variance of the sample-dependent gradients {∇wℓµ} around a minimum of the loss function implies that the loss landscape LB(w) for a minibatch B does not largely depend on B. Such a minimum would contain information on common features among training data samples, which would be relevant for a given classification, but not contain information on sample-specific features which lead to overfitting. This is our intuitive picture that explains why the strong SGD noise results in good generalization performance.
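To make this argument quantitative, the following numpy sketch (our illustration; the function name and the synthetic gradients are assumptions, not part of the paper) estimates the trace of the per-sample gradient covariance appearing in Eq. (4). A small value at a minimum indicates that the minibatch loss landscapes LB(w) depend only weakly on the choice of B, in the sense discussed above.

```python
import numpy as np

def gradient_noise_trace(per_sample_grads: np.ndarray) -> float:
    """Trace of (1/N) sum_mu g_mu g_mu^T - g_bar g_bar^T, i.e. the total
    variance of the per-sample gradients around the full-batch gradient."""
    g_bar = per_sample_grads.mean(axis=0)
    centered = per_sample_grads - g_bar
    return float((centered ** 2).sum(axis=1).mean())

# Toy usage with random stand-ins for {grad of l_mu}: N samples, d parameters.
rng = np.random.default_rng(0)
g = rng.normal(size=(1000, 50))
print(gradient_noise_trace(g))        # large value -> strong SGD noise near this point
print(gradient_noise_trace(g * 0.1))  # smaller spread -> weaker SGD noise
```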
The above consideration is solely based on Eq. (4), i.e., the covariance structure of the SGD noise, and the effect of non-Gaussian noise has been ignored. However, when the SGD noise is strengthened by reducing B, the SGD noise deviates from Gaussian and the above argument should be somehow modified. As we have already mentioned, the inequivalence between the NE and reducing B results from the non-Gaussian nature of the SGD noise, which is therefore a key ingredient to answer the question (ii). The method of the NE can increase the noise strength without changing B, and hence it is considered to suppress the non-Gaussianity compared with the case of just reducing B. The experimental result presented in Sec. 3 then indicates that the non-Gaussian nature of the SGD noise has a negative impact on generalization. A possible interpretation is that sample-specific features show up and are overestimated, which results in overfitting, when the central limit theorem is strongly violated.7 However, the relation between the non-Gaussianity of the SGD noise and generalization remains unclear (Wu et al., 2019), and it would be an important future problem to make this point clear.
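The degree of non-Gaussianity invoked here can be probed with a standard excess-kurtosis estimate of the minibatch noise: it vanishes for a Gaussian and, by the central limit theorem, shrinks roughly like 1/B. The sketch below is a rough numpy illustration with a deliberately heavy-tailed stand-in for the per-sample gradients (an assumption made only for demonstration, not a property of any real network).

```python
import numpy as np

rng = np.random.default_rng(3)
# Heavy-tailed stand-in for the per-sample gradient of a single parameter.
per_sample = rng.standard_t(df=5, size=200_000)

def minibatch_excess_kurtosis(batch_size: int, n_draws: int = 5000) -> float:
    """Excess kurtosis of the minibatch mean: 0 for a Gaussian, roughly 6/B here."""
    means = np.array([rng.choice(per_sample, size=batch_size).mean()
                      for _ in range(n_draws)])
    z = (means - means.mean()) / means.std()
    return float((z ** 4).mean() - 3.0)

for B in (10, 100, 1000):
    print(B, minibatch_excess_kurtosis(B))  # shrinks toward 0 as B grows
```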
In this way, we now have intuitive arguments which might be relevant to answer the questions (i) and (ii), but theoretical solid explanations are still lacking. Our results will not only be useful in practice, but also give theoretical insights into those fundamental questions, which merit further study. | 1. What is the main contribution of the paper, and how does it aim to improve the generalization performance of mini-batch SGD?
2. What are the concerns regarding the experimental validation and comparison with other works in the literature?
3. How does the proposed method differ from other variants of SGD, such as Adam, and how does this impact the analysis and results?
4. What is the significance of the intrinsic noise of SGD, and how does the paper's approach address this aspect?
5. Are there any issues with the presentation and organization of the paper, such as the length and lack of a formal algorithm box?
6. How do the authors justify the use of ghost-batch normalization, and why is it only applied to certain network architectures?
7. Why did the authors choose to conduct their experiments using Adam instead of SGD, and what are the implications of this choice?
8. How do the results of the paper compare to other works that have modified mini-batch SGD with a noise term to improve generalization? | Review | Review
Summary: This paper proposes an approach to improve the generalization performance of mini-batch SGD. The idea is simple: the mini-batch gradient is replaced by an affine combination of two mini-batch gradients, where the mini-batches are sampled independently of each other. Using this "noise-enhanced" approach, the authors claim to control the intrinsic noise of SGD without having to change the batch-size and learning-rate hyper-parameters.
My overall assessment of this paper is that it is not at the level of ICLR; while the proposed method seems like a neat idea, there is no theoretical justification, and there are serious flaws in the experimental validation.
Detailed Comments:
The paper appears to be somewhat unfinished; ends at 6.5 pages?
Regarding the sentence at the end of Section 2, "noise enhancement is also applicable to other variants of SGD like Adam": I am really not sure about this; there are many works in the last few years concerning the distinctions between Adam and SGD. These two algorithms behave very differently from each other, for both optimization and generalization performance. With regards to this paper, the intrinsic noise of SGD in Eq. 4 is very different from the intrinsic noise of Adam. I have derived the intrinsic noise of Adam in my own research (following the steps in the derivation of Eq. 4 plus a lot more tedious algebraic manipulation) and the final form is very different from Eq. 4. Hence, I am skeptical of how the analysis in the remainder of Section 2, which is completely SGD-specific, carries over to Adam.
Since the authors are proposing a new algorithm, it would be better if the authors have a formal "Algorithm box" describing their algorithm in Section 2 with proper pseudo-code
In the C1 description of the network architecture at the beginning of Section 3, the authors mention that ghost-batch normalization is used; I believe this is the one defined in [1]. It should be properly introduced and defined in the paper. GBN is typically used in the large-batch regime; I am wondering at which batch sizes the authors are applying it? Also, is GBN not used in architectures F1 and C2?
The experiments are all done in Adam and not SGD? I believe that this leads to a serious gap between the experiments in Section 3 and all of the analysis and discussions in the rest of the paper. Almost all of the papers in the literature which study SGD noise, batch-size/learning rate effects on generalization conduct their experiments with SGD.
Since this paper proposes to modify mini-batch SGD with a noise term to improve generalization, I would encourage the authors to do some empirical comparisons with other papers which also have done this; for example, [2] and [3].
References:
[1] Hoffer, Elad, Itay Hubara, and Daniel Soudry. "Train longer, generalize better: closing the generalization gap in large batch training of neural networks." Advances in Neural Information Processing Systems. 2017.
[2] Wen, Yeming, et al. "Interplay between optimization and generalization of stochastic gradient descent with covariance noise." arXiv preprint arXiv:1902.08234 (2019).
[3] Zhu, Zhanxing, et al. "The Anisotropic Noise in Stochastic Gradient Descent: Its Behavior of Escaping from Sharp Minima and Regularization Effects." arXiv preprint arXiv:1803.00195 (2018). |
ICLR | Title
Improved generalization by noise enhancement
Abstract
Recent studies have demonstrated that noise in stochastic gradient descent (SGD) is closely related to generalization: A larger SGD noise, if not too large, results in better generalization. Since the covariance of the SGD noise is proportional to η/B, where η is the learning rate and B is the minibatch size of SGD, the SGD noise has so far been controlled by changing η and/or B. However, too large η results in instability in the training dynamics and a small B prevents scalable parallel computation. It is thus desirable to develop a method of controlling the SGD noise without changing η and B. In this paper, we propose a method that achieves this goal using “noise enhancement”, which is easily implemented in practice. We expound the underlying theoretical idea and demonstrate that the noise enhancement actually improves generalization for real datasets. It turns out that large-batch training with the noise enhancement even shows better generalization compared with small-batch training.
1 INTRODUCTION
It is a big theoretical challenge in deep learning studies to understand why networks trained via stochastic gradient descent (SGD) and its variants generalize so well in the overparameterized regime, in which the number of network parameters greatly exceeds that of the training data samples (Zhang et al., 2017). This fundamental problem has been tackled from different points of view (Dziugaite & Roy, 2017; Nagarajan & Kolter, 2017; Neyshabur et al., 2017; 2019; Arora et al., 2018; Pérez et al., 2019; Jacot et al., 2018; Arora et al., 2019; D’Ascoli et al., 2020). Among them, some recent studies have pointed out the importance of an implicit regularization effect of SGD (Zhu et al., 2019; Wu et al., 2019; Smith et al., 2020). Indeed, it is empirically known that the SGD noise strength is strongly correlated with generalization of the trained network (Li et al., 2017; Jastrzȩbski et al., 2017; Goyal et al., 2017; Smith & Le, 2018; Hoffer et al., 2017; 2019). It has also been argued that the SGD noise prefers wide flat minima, which are considered to indicate good generalization (Keskar et al., 2017; Hoffer et al., 2017; Wu et al., 2018). From this viewpoint, not only its strength, but also the structure of the SGD noise is considered to be important since it is theoretically shown that the network can efficiently escape from bad local minima with the help of the SGD noise but not of an isotropic Gaussian noise with the same strength (Zhu et al., 2019; Wu et al., 2019).
The covariance of the SGD noise is proportional to η²/B, where η and B denote the learning rate and the minibatch size, respectively; hence, the SGD noise strength can be controlled by changing η and/or B. To realize good generalization, we want to increase the SGD noise strength by increasing η and/or decreasing B. However, when η becomes too large, the training dynamics often becomes unstable and training fails. On the other hand, decreasing B prevents efficient parallelization over multiple GPUs or TPUs.1 It is therefore desirable to control the SGD noise without changing these hyperparameters.
The main contribution of the present paper is to show that the SGD noise can be controlled without changing η and B by a simple yet efficient method that we call noise enhancement. In this method, the gradient of the loss function is evaluated by using two independent minibatches. We will explain our theoretical idea in Sec. 2. We will also demonstrate that the noise enhancement improves
1However, it is not at all trivial whether the large-batch training is really efficient even with an ideal parallelization. See Golmant et al. (2018); Hoffer et al. (2019) for scalability of large-batch training.
generalization in Sec. 3. In particular, it is empirically shown that the large-batch training using the noise enhancement even outperforms the small-batch training. This result gives us some insights into the relation between the SGD noise and generalization, which is discussed in Sec. 4. Because of its simplicity in implementation, this method would also be useful in practice.
2 NOISE ENHANCEMENT
We shall consider a classification problem. The training dataset D = {(x(µ), y(µ))}µ=1,2,...,N consists of pairs of the input data vector x(µ) and its label y(µ). The set of all the network parameters is simply denoted by w. Then the output of the network for a given input x is denoted by f(x;w). The loss function is defined as
L(w) = (1/N) ∑_{µ=1}^{N} ℓ(f(x(µ); w), y(µ)) ≡ (1/N) ∑_{µ=1}^{N} ℓµ(w),  (1)
where the function ℓ(·, ·) specifies the loss (in this paper we employ the cross-entropy loss).
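For concreteness, a minimal numpy rendering of the per-sample loss ℓµ(w) follows; it assumes that f(x; w) returns unnormalized logits, which is our assumption, since the paper only states that the cross-entropy loss is used.

```python
import numpy as np

def per_sample_cross_entropy(logits: np.ndarray, label: int) -> float:
    """l_mu(w) = -log softmax(f(x_mu; w))[y_mu], computed in a numerically stable way."""
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return float(-log_probs[label])

def full_loss(all_logits: np.ndarray, labels: np.ndarray) -> float:
    """L(w) = (1/N) sum_mu l_mu(w), as in Eq. (1)."""
    return float(np.mean([per_sample_cross_entropy(z, y)
                          for z, y in zip(all_logits, labels)]))
```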
In SGD, the training data is divided into minibatches of size B, and the parameter update is done by using one of them. Let Bt ⊂ {1, 2, . . . , N} with |Bt| = B be the random minibatch chosen at the t-th step; the network parameter wt is then updated as
wt+1 = wt − η∇wLBt(wt),   LBt(w) = (1/B) ∑_{µ∈Bt} ℓµ(w),  (2)
in vanilla SGD, where η > 0 is the learning rate. It is also expressed as
wt+1 = wt − η∇wL(wt)− η [∇wLBt(wt)−∇wL(wt)] ≡ wt − η∇wL(wt)− ξt(wt). (3)
Here, ξt corresponds to the SGD noise since its average over samplings of random minibatches is zero: EBt [ξt] = 0. Its covariance is also calculated straightforwardly (Zhu et al., 2019):
EBt[ξt ξtᵀ] = (η²/B) · ((N − B)/(N − 1)) · ( (1/N) ∑_{µ=1}^{N} ∇wℓµ (∇wℓµ)ᵀ − ∇wL (∇wL)ᵀ ) ≈ (η²/B) · ( (1/N) ∑_{µ=1}^{N} ∇wℓµ (∇wℓµ)ᵀ − ∇wL (∇wL)ᵀ ),  (4)
where we assume N ≫ B in obtaining the last expression. This expression2 shows that the SGD noise strength is controlled by η and B.
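The η²/B scaling in Eq. (4) can be checked numerically without any network: draw many random minibatches from a fixed set of synthetic per-sample gradients, form ξt = η(∇wLBt − ∇wL), and compare its empirical variance with the prediction. The sketch below is our illustration; the names and the synthetic gradients are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, eta = 10_000, 20, 0.1
per_sample_grads = rng.normal(size=(N, d))     # stand-in for {grad of l_mu}
full_grad = per_sample_grads.mean(axis=0)

def noise_trace(batch_size: int, n_draws: int = 2000) -> float:
    """Empirical trace of Cov[xi_t] for minibatches of the given size."""
    total = 0.0
    for _ in range(n_draws):
        idx = rng.choice(N, size=batch_size, replace=False)
        xi = eta * (per_sample_grads[idx].mean(axis=0) - full_grad)
        total += float(xi @ xi)
    return total / n_draws

for B in (50, 100, 200, 400):
    # Prediction from Eq. (4) in the N >> B approximation.
    pred = eta ** 2 / B * ((per_sample_grads - full_grad) ** 2).sum(axis=1).mean()
    print(B, noise_trace(B), pred)  # the two values agree up to sampling error
```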
We want to enhance the SGD noise without changing η and B. Naively, it is possible just by replacing ξt by αξt with a new parameter α > 1. Equation (3) is then written as
wt+1 = wt − η∇wL(wt)− αξt(wt)
= wt − η [α∇wLBt(wt) + (1− α)∇wL(wt)] . (5)
Practically, Eq. (5) would be useless because the computation of ∇wL(wt), i.e. the gradient of the loss function over the entire training data, is required for each iteration.3 Instead, we propose replacing ∇wL(wt) in Eq. (5) by ∇wLB′t(wt), where B′t is another minibatch of the same size B that is independent of Bt. We thus obtain the following update rule of the noise-enhanced SGD:
wt+1 = wt − η [ α∇wLBt(wt) + (1 − α)∇wLB′t(wt) ] . (6)
2From Eq. (4), some authors (Krizhevsky, 2014; Hoffer et al., 2017) argue that the SGD noise strength is proportional to η/√B, while others (Li et al., 2017; Jastrzȩbski et al., 2017; Smith et al., 2018) argue that it is rather proportional to √(η/B) on the basis of the stochastic differential equation obtained for an infinitesimal η → +0. Thus the learning-rate dependence of the noise strength is rather complicated.
3If we have computational resources large enough to realize ideal parallelization for full training dataset, this naive noise enhancement would work. However, with limited computational resources, it is not desirable that we have to evaluate ∇wL(wt) for each iteration.
By defining the SGD noise ξ′t associated with B′t as
ξ′t(wt) = η [∇wLB′t(wt) − ∇wL(wt)],  (7)
Eq. (6) is rewritten as
wt+1 = wt − η∇wL(wt) − ξNEt(wt),  (8)
where the noise ξNEt in the noise-enhanced SGD is given by
ξNEt = αξt + (1 − α)ξ′t.  (9)
Its mean is obviously zero, i.e. EBt,B′t[ξNEt] = 0, and its covariance is given by
EBt,B′t[ξNEt (ξNEt)ᵀ] = α² EBt[ξt ξtᵀ] + (1 − α)² EB′t[ξ′t (ξ′t)ᵀ] = [α² + (1 − α)²] EBt[ξt ξtᵀ],  (10)
where we have used the fact that the two noises ξt and ξ′t are i.i.d. random variables. In this way, the SGD-noise covariance is enhanced by a factor of α² + (1 − α)² > 1 for α > 1. Since the size of the new minibatch B′t is the same as that of the original minibatch Bt, the noise enhancement does not incur any serious computational cost.
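The enhancement factor α² + (1 − α)² in Eq. (10) can also be verified directly: combining two independent copies of a zero-mean noise with weights α and 1 − α scales the variance by exactly this factor. A minimal numpy check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 2.0
xi = rng.normal(size=100_000)        # stand-in for one coordinate of xi_t
xi_prime = rng.normal(size=100_000)  # independent copy with the same distribution

enhanced = alpha * xi + (1 - alpha) * xi_prime
print(enhanced.var() / xi.var())         # ~ alpha^2 + (1 - alpha)^2 = 5.0 for alpha = 2
print(alpha ** 2 + (1 - alpha) ** 2)
```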
If we assume N ≫ B, Eq. (10) is equivalent to Eq. (4) with an effective minibatch size
Beff = B / [α² + (1 − α)²].  (11)
If the SGD noise were Gaussian, this would mean that the noise-enhanced SGD is equivalent to normal SGD with the effective minibatch size Beff. However, the SGD noise is actually far from Gaussian during training (Panigrahi et al., 2019), at least when the minibatch size is not too large. The noise enhancement is therefore not equivalent to reducing the minibatch size to Beff unless Beff is still large enough for the Gaussian approximation to hold.
The procedure of the noise enhancement is summarized as follows: (i) prepare two independent minibatches Bt and B′t, and (ii) replace the minibatch gradient ∇wLBt(wt) by α∇wLBt(wt) + (1 − α)∇wLB′t(wt). The numerical implementation is quite simple. It should be noted that the noise enhancement is also applicable to other variants of SGD, such as Adam.
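A minimal PyTorch sketch of one noise-enhanced update is given below. The function name and the convention of passing two pre-sampled minibatches are ours, and the sketch assumes the softmax is folded into the loss function. Because only the gradient is modified, the same step works unchanged with torch.optim.SGD or torch.optim.Adam, matching the remark about Adam above.

```python
import torch

def noise_enhanced_step(model, loss_fn, optimizer, batch_a, batch_b, alpha):
    """One update with the NE gradient  alpha * grad_{B_t} + (1 - alpha) * grad_{B'_t}.

    batch_a and batch_b are two independently sampled minibatches of the same size.
    Scaling each loss scales its gradient, and .backward() accumulates into .grad."""
    optimizer.zero_grad()
    xa, ya = batch_a
    xb, yb = batch_b
    loss_a = loss_fn(model(xa), ya)
    (alpha * loss_a).backward()            # accumulates alpha * grad on minibatch A
    loss_b = loss_fn(model(xb), yb)
    ((1.0 - alpha) * loss_b).backward()    # adds (1 - alpha) * grad on minibatch B
    optimizer.step()
    return loss_a.item(), loss_b.item()
```

For example, optimizer = torch.optim.Adam(model.parameters()) with default hyperparameters reproduces the optimizer setting of Sec. 3, and α = 1 recovers the ordinary minibatch update (the second backward pass then contributes a zero gradient).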
3 EXPERIMENT
We shall demonstrate the efficiency of the noise enhancement (NE) method for several network configurations on real datasets, as listed in Table 1.
We describe the details of the network architectures below:
• F1: A fully-connected feed-forward network with 7 hidden layers, each of which has 500 neurons with the ReLU activation. The output layer consists of 10 neurons with the softmax activation.
• C1: A modified version of the VGG configuration (Simonyan & Zisserman, 2014). Following Keskar et al. (2017), we denote a stack of n convolutional layers with a filters, a kernel size of b × c, and a stride length of d by n × [a, b, c, d]. The C1 network uses the configuration 3 × [64, 3, 3, 1], 3 × [128, 3, 3, 1], 3 × [256, 3, 3, 1], where a MaxPool(2) is applied after each stack. Ghost-batch normalization of size 100 and the ReLU activation are applied to all layers. Finally, the output layer consists of 10 neurons with the softmax activation (a rough sketch of F1 and C1 is given below).
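The sketch below is a rough PyTorch rendering of F1 and C1 under simplifying assumptions of ours: the padding, the classifier head, and the use of ordinary batch normalization in place of ghost-batch normalization of size 100 are not specified by (or differ from) the paper, and the input dimensions depend on the datasets in Table 1, so they are left as parameters. C2 is not sketched because its configuration is not described here, and the softmax is folded into the loss.

```python
import torch.nn as nn

def make_f1(input_dim: int, width: int = 500, depth: int = 7, classes: int = 10):
    """F1: 7 fully-connected ReLU layers of 500 units, 10-way output."""
    layers, d = [], input_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers += [nn.Linear(d, classes)]  # softmax applied inside the cross-entropy loss
    return nn.Sequential(*layers)

def conv_stack(in_ch: int, out_ch: int, n: int = 3):
    """n x [out_ch, 3, 3, 1] with batch norm and ReLU, then MaxPool(2)."""
    layers = []
    for i in range(n):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, stride=1, padding=1),
                   nn.BatchNorm2d(out_ch),  # simplification; the paper uses ghost-batch norm
                   nn.ReLU()]
    return layers + [nn.MaxPool2d(2)]

def make_c1(in_ch: int = 3, classes: int = 10):
    """C1: 3x[64,3,3,1], 3x[128,3,3,1], 3x[256,3,3,1], each stack followed by MaxPool(2)."""
    return nn.Sequential(*conv_stack(in_ch, 64), *conv_stack(64, 128),
                         *conv_stack(128, 256), nn.Flatten(), nn.LazyLinear(classes))
```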
For all experiments, we used the cross-entropy loss and the Adam optimizer with the default hyperparameters. Neither data augmentation nor weight decay is applied in our experiments. To aid convergence, we halve the learning rate when the training loss reaches the value L∗. Training finishes when the training loss becomes smaller than the value L∗∗. Our choices of L∗ and L∗∗ are also given in Table 1. The convergence time is defined as the number of iteration steps until training finishes. Training is repeated 10 times starting from different random initializations (the Glorot initialization is used), and we measure the mean test accuracy and the mean convergence time as well as their standard deviations.
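The stopping rule and learning-rate schedule just described can be summarized in a small driver loop. The sketch below is our schematic rendering: step_fn and eval_train_loss are placeholders for one (noise-enhanced) optimizer step and an evaluation of the full training loss, the learning rate is assumed to be halved only once, and how the halved rate is passed to Adam is left abstract.

```python
def train(step_fn, eval_train_loss, lr_init, l_star, l_star_star, max_steps=10**6):
    """Training protocol of Sec. 3: halve the learning rate once the training loss
    reaches L*, stop when it falls below L**; returns the convergence time (in steps)."""
    lr, halved = lr_init, False
    for t in range(1, max_steps + 1):
        step_fn(lr)                      # one (noise-enhanced) optimizer update
        loss = eval_train_loss()
        if not halved and loss <= l_star:
            lr, halved = lr / 2.0, True  # halve the learning rate at L*
        if loss < l_star_star:
            return t                     # training finishes at L**
    return max_steps
```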
3.1 EFFECT OF THE NOISE ENHANCEMENT
First we demonstrate how the noise enhancement affects generalization and the convergence time for C1 (similar results are obtained for F1 and C2, as we show later). For each fixed value of α = 1, 1.5, 2.0, 2.5 (α = 1 means that no NE is applied), we calculated the mean test accuracy and the mean convergence time for varying minibatch sizes B. The result is presented in Fig. 1. We can see that the NE improves generalization as long as α is not too large. It is also observed that the generalization gap between small-batch and large-batch training diminishes as α increases, so the NE with large α is particularly effective for large-batch training. On the other hand, for a fixed B the convergence time increases with α.
For each fixed α, there is an optimal minibatch size Bopt, which increases with α. In Table 2, we list Bopt, chosen from {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 5000}, together with the test accuracy and the convergence time at B = Bopt. We see that the test accuracy at Bopt is improved by the NE. Moreover, the NE shortens the convergence time at Bopt without hurting generalization performance.4 This observation demonstrates the practical efficiency of the NE.
Although we have focused on C1, the other configurations F1 and C2 show similar results. For F1 and C2, we compare the result for α = 1 with that for α = 1.5. In Fig. 2, the minibatch-size dependence of the test accuracy and the convergence time is shown for F1 and C2. In Table 2, we also show the test accuracy and the convergence time at B = Bopt for each α in F1 and C2. These results are qualitatively the same as those for C1 (Fig. 1 and Table 2).
3.2 COMPARISON BETWEEN THE NOISE ENHANCEMENT AND REDUCING THE MINIBATCH SIZE
As pointed out above, reducing the minibatch size B with α = 1 has a similar effect to the NE with a fixed B; it results in better generalization but a longer convergence time.5 We shall therefore compare large-batch training with the NE to small-batch training without the NE. First we calculate the test accuracy and the convergence time for varying B and a fixed α = 1 (no NE). We then calculate them for varying α > 1 and a fixed B = 5000, which corresponds to large-minibatch training. In other words, we compare the effect of the NE with that of reducing B.
The comparison between reducing B with α = 1 and increasing α with B = 5000 is given in Fig. 3. We see that both give qualitatively similar curves: the convergence time increases while the test accuracy shows a peak.
4The NE for a fixed B increases the convergence time, but Bopt also increases, which decreases the convergence time.
5As was already mentioned, under the Gaussian noise approximation, increasing α is indeed equivalent to reducing B to Beff given by Eq. (11).
However, in every case of F1, C1, and C2, the NE (increasing α) results in better accuracy compared with reducing B if α is properly chosen.
In Table 3, we compare the best test accuracies between varying B with α = 1 (without the NE) and increasing α with B = 5000 (with the NE). In all cases, the large-batch training with the NE outperforms the small-batch training without the NE.
4 DISCUSSION
We have shown that the method of the NE for gradient-based optimization algorithms improves generalization. In particular, large-batch training with the NE even outperforms small-batch training without the NE, which clearly shows that the NE is not equivalent to reducing the minibatch size B.
In this section, we shall discuss two fundamental questions raised here:
(i) Why does a stronger SGD noise result in a better generalization?
(ii) How is the inequivalence between the NE and reducing B theoretically understood?
We first consider (i). When the SGD noise strength is inhomogeneous in the parameter space, the network parameters are likely to evolve toward a minimum of the loss landscape with a weaker SGD noise.6 That is, if the SGD noise is strong enough near a minimum, the network parameters easily escape from it with the help of the SGD noise. As a result, only minima around which the
6In physics, similar phenomena are known; Brownian particles in a medium with inhomogeneous temperature tend to gather in a colder region (Soret effect) (Duhr & Braun, 2006; Sancho, 2015).
SGD noise is weak enough survive. Since the covariance of the SGD noise is given by Eq. (4), or Eq. (10) for the NE, the strong SGD noise is considered to have an implicit regularization effect toward minima with a small variance of {∇wℓµ}. Some previous studies have introduced various measures which express an implicit regularization effect of SGD (Keskar et al., 2017; Yin et al., 2018; Wu et al., 2018). Among them, the “gradient diversity” introduced by Yin et al. (2018) is closely related to the above argument.
A small variance of the sample-dependent gradients {∇wℓµ} around a minimum of the loss function implies that the loss landscape LB(w) for a minibatch B does not largely depend on B. Such a minimum would contain information on common features among training data samples, which would be relevant for a given classification, but not contain information on sample-specific features which lead to overfitting. This is our intuitive picture that explains why the strong SGD noise results in good generalization performance.
The above consideration is solely based on Eq. (4), i.e., the covariance structure of the SGD noise, and the effect of non-Gaussian noise has been ignored. However, when the SGD noise is strengthened by reducing B, the SGD noise deviates from Gaussian and the above argument should be somehow modified. As we have already mentioned, the inequivalence between the NE and reducing B results from the non-Gaussian nature of the SGD noise, which is therefore a key ingredient to answer the question (ii). The method of the NE can increase the noise strength without changing B, and hence it is considered to suppress the non-Gaussianity compared with the case of just reducing B. The experimental result presented in Sec. 3 then indicates that the non-Gaussian nature of the SGD noise has a negative impact on generalization. A possible interpretation is that sample-specific features show up and are overestimated, which results in overfitting, when the central limit theorem is strongly violated.7 However, the relation between the non-Gaussianity of the SGD noise and generalization remains unclear (Wu et al., 2019), and it would be an important future problem to make this point clear.
In this way, we now have intuitive arguments which might be relevant to answer the questions (i) and (ii), but theoretical solid explanations are still lacking. Our results will not only be useful in practice, but also give theoretical insights into those fundamental questions, which merit further study. | 1. What is the main contribution of the paper, and how does it improve generalization performance?
2. What are the strengths and limitations of the proposed method?
3. How does the reviewer assess the experimental analysis, and what improvements could be made?
4. Are there any questions regarding the applicability of the idea on bigger models and standard training setups?
5. What additional experiments or analyses would help to clarify the results and extend the study? | Review | Review
In this paper, the authors propose a method for enhancing the noise in an SGD update to improve generalization, without changing the learning rate or the batch size hyperparameters. Experiments are run on three datasets to show improvement in generalization performance when changing the hyperparameter controlling the amount of noise.
I find the main idea interesting and potentially useful because of its simplicity. The paper is also nicely written and easy to read. The main limitation of the paper is its experimental analysis. Only small models are considered that do not get up to high accuracy for datasets like CIFAR-10 and CIFAR-100. So the applicability of the idea on bigger models is not explored. In addition, a standard training setup is not used for these models. Instead of training for a fixed number of epochs with a preset learning rate schedule determined by the number of steps, the authors define two loss values L* and L**, which are seemingly randomly chosen, to determine when the learning rate is decreased and when training stops. So it is not clear to me whether the proposed idea works in general even on the small models considered.
I would encourage the authors to do a more thorough experimental study of their idea to show it robustly works across different models and datasets under standard training setups. Please also see a list of my questions below, which would be good to clarify.
A few important questions:
It is not immediately clear to me that the analysis shown here also extends to Adam. Can the authors elaborate on it a bit more and show if the same derivations hold or not? This seems especially relevant given that all experiments are run with Adam.
Is the initial learning rate changed as the batch size is increased in the experiments? If it is, how is it changed? This seems important to consider since the effect of alpha would depend on this.
The analysis depends on the minibatches being randomly sampled from the dataset. What sampling strategy was used for the experiments? In practice, random orderings are used instead of random sampling, particularly as the dataset size increases. Does this affect the results?
Additional questions about the experiments that would be good to clarify:
Is the learning rate kept fixed as alpha is changed? How is this initial learning rate chosen?
There seems to be an optimal alpha (alpha = 2) for the experiments shown in figure 1. However, for the rest of the experiments alpha = 1.5 was used. Was this value optimal for the other problems? It would have helped to have done a sweep of alpha across different problems to show how much the optimal alpha varied.
Another relatively minor point: standard regularization techniques like data augmentation or weight decay are not used in the experiments. Does the effect of the noise enhancement still exist when using weight decay?
Other minor comments:
In the paper, it is mentioned that the covariance of the SGD noise is proportional to eta^2/B (please add a citation when this statement is first made in the introduction). From what I understand of the prior literature, the particular scaling relationship between the learning rate and the batch size depends on the structure of the noise that is assumed, and empirically, the eta/B scaling seems to hold up more consistently in the literature. Given that the authors do not really use this scaling relationship in their method (except to make the point that the noise can be controlled by changing either the learning rate or the batch size), it is probably best to mention both scaling laws in the introduction and elsewhere with the relevant citations.
It would have been interesting to explicitly show in the experiments how different the performance of using a batch size of B_eff = B/(alpha^2 + (1-alpha)^2) is compared to using noise enhancement using parameter alpha.
Typo right above equation 2: "In the SGD" -> In SGD |
ICLR | Title
Improved generalization by noise enhancement | 1. What is the focus of the paper regarding semantic correspondence?
2. What are the strengths and weaknesses of the proposed approach in representing neural networks?
3. Do you have any concerns or limitations regarding the NeMF approach?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the key contributions and novel aspects introduced by the paper in spin glass techniques?
6. What are the weaknesses of the paper compared to prior works?
7. What are the shining points of the paper in dictionary learning?
8. Are there any questions regarding the paper, particularly in theoretical analysis?
9. What is the main contribution of the paper on fixing generalization issues with large mini-batch sizes?
10. What are the positives and negatives of the proposed method, especially regarding its novelty and computational cost? | Review | Review
This paper proposes "noise enhancement", an algorithm intended to fix generalization issues with large mini-batch sizes by adding additional mean-zero SGD noise to the update computed using an additional batch of samples. Experimental results show that adding this noise to the optimization procedure improves the final test performance. The authors intend to use this method in cases when it is hard to increase the learning rate, and decreasing the batch size would result in reduced parallelization.
Positives:
This method is clearly motivated, as a simple calculation shows that adding the noise would cause the covariance to scale up proportionally. Prior work which studies on the SGD noise covariance implies that this would lead to better generalization.
Negatives:
My main concern regards the novelty of the proposed method, as experiments with artificially added noise have been explored extensively in the prior literature [1, 2, and possibly others]. These papers perform extremely similar experiments based on adding mean-zero noise to the updates to improve final test performance.
Some details about the computational cost are unclear. For example, the paper plots convergence time vs. batch size, but how is "convergence" measured? Also, how does the noise enhancement affect the wall-clock time?
More justification is required that simply increasing the learning rate eta is not sufficient, which was claimed on the first page "However, when η becomes too large, the training dynamics often becomes unstable and the training fails." In particular, prior work [3] shows that the learning rate can be scaled linearly with batch size for ImageNet, up to quite large batch sizes.
In summary, the novelty of the proposed method is the main reason for my rating.
[1] Wen et al. 2020. An Empirical Study of Large-Batch Stochastic Gradient Descent with Structured Covariance Noise. [2] Wei et al. 2020. The Implicit and Explicit Regularization Effects of Dropout. [3] Goyal et al. 2017. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour |
ICLR | Title
Improved generalization by noise enhancement
Abstract
Recent studies have demonstrated that noise in stochastic gradient descent (SGD) is closely related to generalization: A larger SGD noise, if not too large, results in better generalization. Since the covariance of the SGD noise is proportional to η/B, where η is the learning rate and B is the minibatch size of SGD, the SGD noise has so far been controlled by changing η and/or B. However, too large η results in instability in the training dynamics and a small B prevents scalable parallel computation. It is thus desirable to develop a method of controlling the SGD noise without changing η and B. In this paper, we propose a method that achieves this goal using “noise enhancement”, which is easily implemented in practice. We expound the underlying theoretical idea and demonstrate that the noise enhancement actually improves generalization for real datasets. It turns out that large-batch training with the noise enhancement even shows better generalization compared with small-batch training.
1 INTRODUCTION
It is a big theoretical challenge in deep learning studies to understand why networks trained via stochastic gradient descent (SGD) and its variants generalize so well in the overparameterized regime, in which the number of network parameters greatly exceeds that of the training data samples (Zhang et al., 2017). This fundamental problem has been tackled from different points of view (Dziugaite & Roy, 2017; Nagarajan & Kolter, 2017; Neyshabur et al., 2017; 2019; Arora et al., 2018; Pérez et al., 2019; Jacot et al., 2018; Arora et al., 2019; D’Ascoli et al., 2020). Among them, some recent studies have pointed out the importance of an implicit regularization effect of SGD (Zhu et al., 2019; Wu et al., 2019; Smith et al., 2020). Indeed, it is empirically known that the SGD noise strength is strongly correlated with generalization of the trained network (Li et al., 2017; Jastrzȩbski et al., 2017; Goyal et al., 2017; Smith & Le, 2018; Hoffer et al., 2017; 2019). It has also been argued that the SGD noise prefers wide flat minima, which are considered to indicate good generalization (Keskar et al., 2017; Hoffer et al., 2017; Wu et al., 2018). From this viewpoint, not only its strength, but also the structure of the SGD noise is considered to be important since it is theoretically shown that the network can efficiently escape from bad local minima with the help of the SGD noise but not of an isotropic Gaussian noise with the same strength (Zhu et al., 2019; Wu et al., 2019).
The covariance of the SGD noise is proportional to η²/B, where η and B denote the learning rate and the minibatch size, respectively; hence, the SGD noise strength can be controlled by changing η and/or B. To realize good generalization, we want to increase the SGD noise strength by increasing η and/or decreasing B. However, when η becomes too large, the training dynamics often becomes unstable and training fails. On the other hand, decreasing B prevents efficient parallelization over multiple GPUs or TPUs.1 It is therefore desirable to control the SGD noise without changing these hyperparameters.
The main contribution of the present paper is to show that the SGD noise can be controlled without changing η and B by a simple yet efficient method that we call noise enhancement. In this method, the gradient of the loss function is evaluated by using two independent minibatches. We will explain our theoretical idea in Sec. 2. We will also demonstrate that the noise enhancement improves
1However, it is not at all trivial whether the large-batch training is really efficient even with an ideal parallelization. See Golmant et al. (2018); Hoffer et al. (2019) for scalability of large-batch training.
generalization in Sec. 3. In particular, it is empirically shown that the large-batch training using the noise enhancement even outperforms the small-batch training. This result gives us some insights into the relation between the SGD noise and generalization, which is discussed in Sec. 4. Because of its simplicity in implementation, this method would also be useful in practice.
2 NOISE ENHANCEMENT
We shall consider a classification problem. The training dataset D = {(x(µ), y(µ))}µ=1,2,...,N consists of pairs of the input data vector x(µ) and its label y(µ). The set of all the network parameters is simply denoted by w. Then the output of the network for a given input x is denoted by f(x;w). The loss function is defined as
L(w) = (1/N) ∑_{µ=1}^{N} ℓ(f(x(µ); w), y(µ)) ≡ (1/N) ∑_{µ=1}^{N} ℓµ(w),  (1)
where the function ℓ(·, ·) specifies the loss (in this paper we employ the cross-entropy loss).
In SGD, the training data is divided into minibatches of size B, and the parameter update is done by using one of them. Let Bt ⊂ {1, 2, . . . , N} with |Bt| = B be the random minibatch chosen at the t-th step; the network parameter wt is then updated as
wt+1 = wt − η∇wLBt(wt),   LBt(w) = (1/B) ∑_{µ∈Bt} ℓµ(w),  (2)
in vanilla SGD, where η > 0 is the learning rate. It is also expressed as
wt+1 = wt − η∇wL(wt)− η [∇wLBt(wt)−∇wL(wt)] ≡ wt − η∇wL(wt)− ξt(wt). (3)
Here, ξt corresponds to the SGD noise since its average over samplings of random minibatches is zero: EBt [ξt] = 0. Its covariance is also calculated straightforwardly (Zhu et al., 2019):
EBt[ξt ξtᵀ] = (η²/B) · ((N − B)/(N − 1)) · ( (1/N) ∑_{µ=1}^{N} ∇wℓµ (∇wℓµ)ᵀ − ∇wL (∇wL)ᵀ ) ≈ (η²/B) · ( (1/N) ∑_{µ=1}^{N} ∇wℓµ (∇wℓµ)ᵀ − ∇wL (∇wL)ᵀ ),  (4)
where we assume N ≫ B in obtaining the last expression. This expression2 shows that the SGD noise strength is controlled by η and B.
We want to enhance the SGD noise without changing η and B. Naively, it is possible just by replacing ξt by αξt with a new parameter α > 1. Equation (3) is then written as
wt+1 = wt − η∇wL(wt)− αξt(wt)
= wt − η [α∇wLBt(wt) + (1− α)∇wL(wt)] . (5)
Practically, Eq. (5) would be useless because the computation of ∇wL(wt), i.e. the gradient of the loss function over the entire training data, is required for each iteration.3 Instead, we propose replacing ∇wL(wt) in Eq. (5) by ∇wLB′t(wt), where B′t is another minibatch of the same size B that is independent of Bt. We thus obtain the following update rule of the noise-enhanced SGD:
wt+1 = wt − η [ α∇wLBt(wt) + (1 − α)∇wLB′t(wt) ] . (6)
2From Eq. (4), some authors (Krizhevsky, 2014; Hoffer et al., 2017) argue that the SGD noise strength is proportional to η/√B, while others (Li et al., 2017; Jastrzȩbski et al., 2017; Smith et al., 2018) argue that it is rather proportional to √(η/B) on the basis of the stochastic differential equation obtained for an infinitesimal η → +0. Thus the learning-rate dependence of the noise strength is rather complicated.
3If we have computational resources large enough to realize ideal parallelization for full training dataset, this naive noise enhancement would work. However, with limited computational resources, it is not desirable that we have to evaluate ∇wL(wt) for each iteration.
By defining the SGD noise ξ′t associated with B′t as
ξ′t(wt) = η [∇wLB′t(wt) − ∇wL(wt)],  (7)
Eq. (6) is rewritten as
wt+1 = wt − η∇wL(wt) − ξNEt(wt),  (8)
where the noise ξNEt in the noise-enhanced SGD is given by
ξNEt = αξt + (1 − α)ξ′t.  (9)
Its mean is obviously zero, i.e. EBt,B′t[ξNEt] = 0, and its covariance is given by
EBt,B′t[ξNEt (ξNEt)ᵀ] = α² EBt[ξt ξtᵀ] + (1 − α)² EB′t[ξ′t (ξ′t)ᵀ] = [α² + (1 − α)²] EBt[ξt ξtᵀ],  (10)
where we have used the fact that the two noises ξt and ξ′t are i.i.d. random variables. In this way, the SGD-noise covariance is enhanced by a factor of α² + (1 − α)² > 1 for α > 1. Since the size of the new minibatch B′t is the same as that of the original minibatch Bt, the noise enhancement does not incur any serious computational cost.
If we assume N ≫ B, Eq. (10) is equivalent to Eq. (4) with an effective minibatch size
Beff = B / [α² + (1 − α)²].  (11)
If the SGD noise were Gaussian, this would mean that the noise-enhanced SGD is equivalent to normal SGD with the effective minibatch size Beff. However, the SGD noise is actually far from Gaussian during training (Panigrahi et al., 2019), at least when the minibatch size is not too large. The noise enhancement is therefore not equivalent to reducing the minibatch size to Beff unless Beff is still large enough for the Gaussian approximation to hold.
The procedure of the noise enhancement is summarized as follows: (i) prepare two independent minibatches $B_t$ and $B'_t$, and (ii) replace the minibatch gradient $\nabla_w L_{B_t}(w_t)$ by $\alpha\nabla_w L_{B_t}(w_t) + (1-\alpha)\nabla_w L_{B'_t}(w_t)$. The numerical implementation is quite simple. It should be noted that the noise enhancement is also applicable to other variants of SGD such as Adam.
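A minimal sketch of one noise-enhanced update of Eq. (6) is given below (plain NumPy; the helper `grad_fn` and all argument names are our own illustrative choices, not part of the paper):

```python
import numpy as np

def noise_enhanced_sgd_step(w, grad_fn, X, y, batch_size, lr, alpha, rng):
    """One noise-enhanced SGD update, Eq. (6):
    w <- w - lr * [alpha * g_{B_t} + (1 - alpha) * g_{B'_t}],
    where B_t and B'_t are two independent minibatches of the same size.
    grad_fn(w, Xb, yb) must return the minibatch loss gradient at w.
    """
    n = X.shape[0]
    idx_b = rng.choice(n, size=batch_size, replace=False)        # minibatch B_t
    idx_b_prime = rng.choice(n, size=batch_size, replace=False)  # independent minibatch B'_t
    g_b = grad_fn(w, X[idx_b], y[idx_b])
    g_b_prime = grad_fn(w, X[idx_b_prime], y[idx_b_prime])
    return w - lr * (alpha * g_b + (1.0 - alpha) * g_b_prime)
```

Setting alpha = 1 recovers the vanilla SGD update; for adaptive optimizers such as Adam, the combined gradient would simply be fed to the optimizer in place of the usual minibatch gradient.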
3 EXPERIMENT
We shall demonstrate the efficiency of the method of the noise enhancement (NE) for several network configurations with a real dataset as listed in Table 1.
We describe the details of the network architecture below:
• F1: A fully-connected feed-forward network with 7 hidden layers, each of which has 500 neurons with the ReLU activation. The output layer consists of 10 neurons with the softmax activation.
• C1: A modified version of the VGG configuration (Simonyan & Zisserman, 2014). Following Keskar et al. (2017), let us denote a stack of n convolutional layers of a filters and a kernel size of b × c with the stride length of d by n × [a, b, c, d]. The C1 network uses the configuration: 3× [64, 3, 3, 1], 3× [128, 3, 3, 1], 3× [256, 3, 3, 1], where a MaxPool(2) is applied after each stack. To all layers, the ghost-batch normalization of size 100 and the ReLU activation are applied. Finally, an output layer consists of 10 neurons with the softmax activation.
For all experiments, we used the cross-entropy loss and the Adam optimizer with the default hyperparameters. Neither data augmentation nor weight decay is applied in our experiments. To aid convergence, we halve the learning rate when the training loss reaches the value L∗. Training finishes when the training loss becomes smaller than the value L∗∗. Our choices of L∗ and L∗∗ are also described in Table 1. The convergence time is defined as the number of iteration steps until training finishes. Training is repeated 10 times starting from different random initializations (the Glorot initialization is used), and we measure the mean test accuracy and the mean convergence time as well as their standard deviations.
3.1 EFFECT OF THE NOISE ENHANCEMENT
First we demonstrate how the noise enhancement affects the generalization and the convergence time for C1 (similar results are obtained for F1 and C2 as we show later). For each fixed value of α = 1, 1.5, 2.0, 2.5 (α = 1 means no NE applied) we calculated the mean test accuracy and the mean convergence time for varying minibatch sizes B. The result is presented in Fig. 1. We can see that the NE improves generalization for a not too large α. It is also observed that the generalization gap between small-batch training and large-batch training diminishes by increasing α. The NE with large α is therefore efficient for large-batch training. On the other hand, the convergence time increases with α for a fixed B.
For each fixed α, there is an optimal minibatch size Bopt, which increases with α. In Table 2, we list Bopt ∈ {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 2000, 3000, 5000} as well as the test
accuracy and the convergence time at B = Bopt. We see that the test accuracy at Bopt is improved by the NE. Moreover, the NE shortens the convergence time at Bopt without hurting generalization performance.⁴ This experimental observation shows the practical efficiency of the NE method.
Although we have focused on C1, the other configurations F1 and C2 also show similar results. For F1 and C2, we compare the result for α = 1 with that for α = 1.5. In Fig. 2, the minibatch-size dependences of the test accuracy and the convergence time are shown for F1 and C2. In Table 2, we also show the test accuracy and the convergence time at B = Bopt for each α in F1 and C2. These results are qualitatively the same as those for C1 (Fig. 1 and Table 2).
3.2 COMPARISON BETWEEN THE NOISE ENHANCEMENT AND REDUCING THE MINIBATCH SIZE
It is pointed out that reducing the minibatch size B with α = 1 has a similar effect as the NE with a fixed B; it results in better generalization but a longer convergence time.5 We shall compare the large-batch training with the NE to the small-batch training without the NE. First we calculate the test accuracy and the convergence time for varying B and a fixed α = 1 (no NE). We then calculate the test accuracy for varying α > 1 and a fixed B = 5000, which corresponds to large minibatch training. In other words, we compare the effect of the NE with that of reducing B.
The comparison between reducing B with α = 1 and increasing α with B = 5000 is given in Fig. 3. We see that both give similar curves: the convergence time increases while the test accuracy exhibits a peak.
4The NE for a fixed B increases the convergence time, but Bopt also increases, which decreases the convergence time.
5As was already mentioned, under the Gaussian noise approximation, increasing α is indeed equivalent to reducing B to Beff given by Eq. (11).
However, in every case of F1, C1, and C2, the NE (increasing α) results in better accuracy compared with reducing B if α is properly chosen.
In Table 3, we compare the best test accuracies between varying B with α = 1 (without the NE) and increasing α with B = 5000 (with the NE). In all cases, the large-batch training with the NE outperforms the small-batch training without the NE.
4 DISCUSSION
We have shown that the method of the NE for gradient-based optimization algorithms improves generalization. In particular, large-batch training with the NE even outperforms small-batch training without the NE, which clearly shows that the NE is not equivalent to reducing the minibatch size B.
In this section, we shall discuss two fundamental questions raised here:
(i) Why does a stronger SGD noise result in a better generalization?
(ii) How is the inequivalence between the NE and reducing B theoretically understood?
We first consider (i). When the SGD noise strength is inhomogeneous in the parameter space, network parameters are likely to evolve to a minimum of the loss landscape with a weaker SGD noise.⁶ That is, if the SGD noise is strong enough near a minimum, the network parameters will easily escape from it with the help of the SGD noise. As a result, only minima around which the
6In physics, similar phenomena are known; Brownian particles in a medium with inhomogeneous temperature tend to gather in a colder region (Soret effect) (Duhr & Braun, 2006; Sancho, 2015).
SGD noise is weak enough survive. Since the covariance of the SGD noise is given by Eq. (4), or Eq. (10) for the NE, the strong SGD noise is considered to have an implicit regularization effect toward minima with a small variance of {∇wℓµ}. Some previous studies have introduced various measures which express an implicit regularization effect of SGD (Keskar et al., 2017; Yin et al., 2018; Wu et al., 2018). Among them, the “gradient diversity” introduced by Yin et al. (2018) is closely related to the above argument.
A small variance of the sample-dependent gradients {∇wℓµ} around a minimum of the loss function implies that the loss landscape LB(w) for a minibatch B does not largely depend on B. Such a minimum would contain information on common features among training data samples, which would be relevant for a given classification, but not contain information on sample-specific features which lead to overfitting. This is our intuitive picture that explains why the strong SGD noise results in good generalization performance.
The above consideration is solely based on Eq. (4), i.e., the covariance structure of the SGD noise, and the effect of non-Gaussian noise has been ignored. However, when the SGD noise is strengthened by reducing B, the SGD noise deviates from Gaussian and the above argument should be somehow modified. As we have already mentioned, the inequivalence between the NE and reducing B results from the non-Gaussian nature of the SGD noise, which is therefore a key ingredient to answer the question (ii). The method of the NE can increase the noise strength without changing B, and hence it is considered to suppress the non-Gaussianity compared with the case of just reducing B. The experimental result presented in Sec. 3 then indicates that the non-Gaussian nature of the SGD noise has a negative impact on generalization. A possible interpretation is that sample-specific features show up and are overestimated, which results in overfitting, when the central limit theorem is strongly violated.7 However, the relation between the non-Gaussianity of the SGD noise and generalization remains unclear (Wu et al., 2019), and it would be an important future problem to make this point clear.
In this way, we now have intuitive arguments which might be relevant to answer the questions (i) and (ii), but theoretical solid explanations are still lacking. Our results will not only be useful in practice, but also give theoretical insights into those fundamental questions, which merit further study. | 1. What is the main contribution of the paper regarding SGD noise?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the significance of the experimental results?
4. Are there any concerns or questions about the paper's content, such as its focus, clarity, or relevance to the field? | Review | Review
This paper follows the line of study on the regularization effect of SGD noise, and proposes an interesting trick to tune the noise scale. The trick is very simple, to enlarge the noise scale, do an extrapolate between gradients of two mini-batches. Yet it has advantages that it does not involve the batch size and learning rate, which can potentially benefit large batch training.
Pros:
The paper proposes a very simple and potentially useful trick for tuning SGD noise magnitude by an extrapolating of two mini-batches. The trick can be useful because (1) it is computational feasible; (2) it keeps the particular structure of SGD noise; (3) it serves as an additional hyperparameter, and does not affect learning rate and batch size. (3) is rather important since enlarging LR can hurt convergence, and reducing batch size can reduce the parallelism. From this point of view, I think this trick really worths to explore in depth.
Cons:
With the above being said, the empirical studies in this version are really disappointing. They only test CIFAR-10 / FashionMNIST with ConvNet / VGG. As an empirical paper, those are far from convincing. I would expect at least (1) ResNet and (2) ImageNet.
Page 2, footnote 2. I do not think there is any complication for measuring the magnitude of SGD noise. The difference is simply because of the different definitions of SGD noise: in stochastic modified equations, they took an η as the step size, and treated the rest as the noise.
Page 4, experiment setups. You discuss SGD in the whole paper, but do experiments with Adam??? Please provide formal experiments at least ablation studies for SGD. These are the basic requirements for science.
Several claims are too vague and lack backing, especially in Sec. 4. Please do not make statements without justifications. Moreover, I think Wu et al. 2019 do claim the class of noise is not important for regularization, which makes it confusing why you cite this one in the second-to-last paragraph.
Overall: I think the paper has potential, given a complete and rigorous large-scale empirical study. However, the current version is too weak for an acceptance. I suggest the authors to re-write the paper with less vague statements as well. |
ICLR | Title
On $\mathcal{O}(1/K)$ Convergence and Low Sample Complexity for Single-Timescale Policy Evaluation with Nonlinear Function Approximation
Abstract
Learning an accurate value function for a given policy is a critical step in solving reinforcement learning (RL) problems. So far, however, the convergence speed and sample complexity performances of most existing policy evaluation algorithms remain unsatisfactory, particularly with non-linear function approximation. This challenge motivates us to develop a new variance-reduced primal-dual method (VRPD) that is able to achieve a fast convergence speed for RL policy evaluation with nonlinear function approximation. To lower the high sample complexity limitation of variance-reduced approaches (due to the periodic full gradient evaluation with all training data), we further propose an enhanced VRPD method with an adaptive-batch adjustment (VRPD+). The main features of VRPD include: i) VRPD allows the use of constant step sizes and achieves the O(1/K) convergence rate to the first-order stationary points of non-convex policy evaluation problems; ii) VRPD is a generic single-timescale algorithm that is also applicable for solving a large class of non-convex strongly-concave minimax optimization problems; iii) By adaptively adjusting the batch size via historical stochastic gradient information, VRPD+ is more sample-efficient empirically without loss of theoretical convergence rate. Our extensive numerical experiments verify our theoretical findings and showcase the high efficiency of the proposed VRPD and VRPD+ algorithms compared with the state-of-the-art methods.
1 INTRODUCTION
In recent years, advances in reinforcement learning (RL) have achieved enormous successes in a large number of areas, including healthcare (Petersen et al., 2019; Raghu et al., 2017b), financial recommendation (Theocharous et al., 2015), resources management (Mao et al., 2016; Tesauro et al., 2006) and robotics (Kober et al., 2013; Levine et al., 2016; Raghu et al., 2017a), to name just a few. In RL applications, an agent interacts with an environment and repeats the tasks of observing the current state, performing a policy-based action, receiving a reward, and transition to the next state. A key step in many RL algorithms is the policy evaluation (PE) problem, which aims to learn the value function that estimates the expected long-term accumulative reward for a given policy. Value functions not only explicitly provide the agent’s accumulative rewards, but could also be utilized to update the current policy so that the agent can visit valuable states more frequently (Bertsekas & Tsitsiklis, 1995; Lagoudakis & Parr, 2003). In RL policy evaluation, two of the most important performance metrics are convergence rate and sample complexity. First, since policy evaluation is a subroutine of an overall RL task, developing fast-converging policy evaluation algorithms is of critical importance to the overall efficiency of RL. Second, due to the challenges in collecting a large number of training samples (trajectories of state-action pairs) for policy evaluations in RL, reducing the number of samples (i.e., sample complexity) can significantly alleviate the burden of data collection for solving policy evaluation problems. These two important aspects motivate us to pursue a fast-converging policy evaluation algorithm with a low sample-complexity in this paper.
Among various algorithms for policy evaluation, one of the simplest and most effective methods is the temporal difference (TD) learning approach (Sutton, 1988). Instead of focusing on the predicted and actual outcomes, the key idea of the TD learning is to make the difference between temporally successive predictions small. Specifically, the TD learning approach learns the value function by
using the Bellman equation to bootstrap from the current estimated value function. To date, there have been many algorithms proposed within the family of TD learning (Dann et al., 2014). However, most of these methods suffer from either a unstable convergence performance, (e.g., TD(λ) (Sutton, 1988) for off-policy training) or a high computational complexity (e.g., least-squares temporal difference (LSTD) (Boyan, 2002; Bradtke & Barto, 1996)) in training with massive features. The limitation of these early attempts is largely due to the fact that they do not leverage the gradient-oracle in policy evaluation. Thus, in recent years, gradient-based policy evaluation algorithms have become increasingly prevalent. However, the design of efficient gradient-based policy evaluation algorithm is a non-trivial task. On one hand, as an RL task becomes more sophisticated, it is more appropriate to utilize nonlinear function approximation (e.g., deep neural network (DNN)) to model the value function. However, when working with nonlinear DNN models, the convergence performance of the conventional single-timescale TD algorithms may not be guaranteed (Tsitsiklis & Van Roy, 1997). To address this issue, some convergent two-timescale algorithms (Bhatnagar et al., 2009; Chung et al., 2018) have been proposed at the expense of higher implementation complexity. On the other hand, modern policy evaluation tasks could involve a large amount of state transition data. To perform policy evaluation, algorithms typically need to calculate full gradients that require all training data (e.g., gradient temporal difference (GTD) (Sutton et al., 2008) and TD with gradient correction (TDC) (Sutton et al., 2009b)), which entails a high sample complexity. So far, existing works on PE are either focus on linear approximation (GTD2 (Sutton et al., 2009b), PDBG (Du et al., 2017), SVRG (Du et al., 2017), SAGA (Du et al., 2017)) or have such a slower convergence performance (STSG (Qiu et al., 2020), VR-STSG (Qiu et al., 2020), nPD-VR (Wai et al., 2019)) (see detailed discussions in Section. 2). In light of the above limitations, in this paper, we ask the following question: Could we develop an efficient single-timescale gradient-based algorithm for policy evaluation based on nonlinear function approximation?
In this paper, we give an affirmative answer to the above question. Specifically, we propose an efficient gradient-based variance-reduced primal-dual algorithm (VRPD) to tackle the policy evaluation problem with nonlinear function approximation, which we recast as a minimax optimization problem. Our VRPD algorithm admits a simple and elegant single-timescale algorithmic structure. Then, we further enhance VRPD by proposing VRPD+, which uses adaptive batch sizes to relax the periodic full gradient evaluation to further reduce sample complexity. The main contribution of this paper is that our proposed algorithms achieve an O(1/K) convergence rate (K is the number of iterations) with constant step-sizes for policy evaluation with nonlinear function approximation, which is the best-known result in the literature thus far. Our main results are highlighted as follows:
• By utilizing a variance reduction technique, our VRPD algorithm allows constant step-sizes and enjoys a low sample complexity. We show that, under mild assumptions and appropriate parameter choices, VRPD achieves an O(1/K) convergence rate to the first-order stationary point of a class of nonconvex-strongly-concave(NCSC) minimax problems, which is the best-known result in the literature. To achieve this result, our convergence rate analysis introduces new proof techniques and resolves an open question and clarifies an ambiguity in the state-of-the-art convergence analysis of VR-based policy evaluation methods (see 2nd paragraph in Section 2.1 for more discussions).
• VRPD+ significantly improves the sample complexity of the VRPD algorithm for policy evaluation with massive datasets. Our VRPD+ (adaptive-batch VRPD) algorithm incorporates historical information along the optimization path, but does not involve backtracking and condition verification. We show that our VRPD+ algorithm significantly reduces the number of samples and the computation loads of gradients, thanks to our proposed adaptive batch size technique that is able to avoid full gradient evaluation.
• Our extensive experimental results also confirm that our algorithms outperform the state-of-theart gradient-based policy evaluation algorithms, and our VRPD+ can further reduce the sample complexity compared to the VRPD algorithm. It is worth noting that, although the focus of our work is on RL policy evaluation, our algorithmic design and proof techniques contribute to the area of minimax optimization and could be of independent theoretical interest.
2 RELATED WORK
1) TD Learning with Function Approximation for Policy Evaluation: TD learning with function approximation plays a vital role in policy evaluation for RL. The key idea of TD learning is to
minimize the Bellman error for approximating the value function. So far, most existing TD learning algorithms with theoretical guarantees focus on the linear setting (e.g., (Sutton et al., 2009a; Srikant & Ying, 2019; Xu et al., 2020b; Stankovic & Stankovic, 2016; Touati et al., 2018)). Doan et al. (2019), Liu et al. (2015), Macua et al. (2014), and Zhang & Xiao (2019) provided a finite-time analysis for the proposed distributed TD(0) and showed that the convergence rate of their algorithm is O(1/K). It was shown in Du et al. (2017) that policy evaluation with linear function approximation by TD(0) can be formulated as a strongly convex-concave or convex-concave problem, and can be solved by a primal-dual method with a linear convergence rate. However, the linearity assumption cannot be applied in a wide range of policy evaluations with nonlinear models. TD learning with nonlinear (smooth) function approximation is far more complex. Maei et al. (2009) was among the first to propose a general framework for minimizing the generalized mean-squared projected Bellman error (MSPBE) with smooth and nonlinear value functions. However, they adopted twotimescale step-sizes but only obtained a slow convergence performance. Other TD methods with nonlinear function approximations for policy evaluations include (Wang et al., 2017; 2016). Qiu et al. (2020) also investigated nonlinear TD learning and proposed two single-timescale first-order stochastic algorithms. However, the convergence rate of their STSG and VR-STSG are O(1/K1/4) and O(1/K1/3), while our VRPD algorithm achieves a much faster O(1/K) convergence rate. In policy evaluation with non-linear function approximation, the state-of-the-art and the most related work to ours is (Wai et al., 2019), which showed that minimizing the generalized MSPBE problem is equivalent to solving a non-convex-strongly-concave (NCSC) minimax optimization problem via the Fenchel’s duality. However, their best convergence results only hold when the step-size is O( 1M ), where M is the size of the dataset. This is problematic for modern RL problems with a large state-action transition dataset. More importantly, although their convergence theorem appears to have a 1K factor (K being the total number of iterations), their convergence rate bound is in the form of F (K)+Constant1 K·Constant2 (cf. Theorem 1, Eq. (26) in Wai et al. (2019)). Notably, the F
(K) term in the denominator in Eq. (26) inherently depends on the primal and dual values θ(K) and ω(K) in the K-th iteration, respectively. It is unclear whether ω(K) can be bounded in (Wai et al., 2019), hence leading to an ambiguity in guaranteeing an O(1/K) convergence rate. Thus, whether an O(1/K) convergence rate is achievable in single-timescale policy evaluation with nonlinear function approximation and constant step-sizes remains an open question thus far. The key contribution and novelty in this paper is that we resolve the above open question by proposing two new algorithms, both achieving anO(1/K) convergence rate. To establish this result, we propose a new convergence metric (cf. Eq. (9) in Section 4.1), which necessitates new proof techniques and analysis. For easy comparisons, we summarize our algorithms and the related works in Table 1.
2) Relations with NCSC Minimax Optimization: Although the focus of our paper is on RL policy evaluation, our algorithmic techniques are also related to the area of NCSC minimax optimization due to the primal-dual MSPBE formulation (cf. Eq. (2) in Section 3). Early attempts in (Nouiehed et al., 2019; Lin et al., 2020b) developed gradient descent-ascent algorithms to solve the NCSC minimax problems. However, these methods suffer from a high sample complexity and slow convergence rate. To overcome this limitation, two variance-reduction algorithms named SREDA (Luo et al., 2020) are proposed for solving NCSC minimax problems, which shares some similarity to our work.
Later, Xu et al. (2020a) enhanced SREDA to allow bigger step-sizes. However, our algorithms still differ from SREDA in the following key aspects: (i) Our algorithms are single-timescale algorithms, which are much easier to implement. In comparison, SREDA is a two-timescale algorithm, where solving an inner concave maximization subproblem is needed. Thus, to a certain extent, SREDA can be viewed as a triple-loop structure, and hence the computational complexity of SREDA is higher than ours; (ii) In the initialization stage, SREDA uses the PiSARAH, which is a subroutine that aims to help the SREDA algorithm achieve the desired accuracy at the initialization step and can be seen as an additional step to solve an inner concave maximization subproblem. Thus, SREDA has a higher computation cost than our paper. (iii) The number of parameters in SREDA are far more than ours and it requires the knowledge of the condition number to set the algorithm’s parameters for good convergence performance. By contrast, our algorithms only require step-sizes α and β to be sufficiently small, which is easier to tune in practice. (iv) SREDA does not provide an explicit convergence rate in their paper (it is unclear what their convergence rate is from their proof either). Yet, we show that our VRPD in theory has a lower sample complexity than that of SREDA.
Another related work in terms of NCSC minimax optimization is (Zhang et al., 2021), which also provided sample complexity upper and lower bounds. However, there remains a gap between the sample complexity lower and upper bounds in (Zhang et al., 2021). By contrast, the sample complexity of our VRPD algorithm matches the lower bound O(M + √ M −2) in (Zhang et al., 2021), which is the first in the literature. Furthermore, the algorithm contains an inner minimax subproblem (cf. Line 6 of Algorithm 1 in Zhang et al. (2021)). Solving such a subproblems in the inner loop incurs high computational costs. Due to this reason, the algorithm in (Zhang et al., 2021) had to settle for an inexact solution, which hurts the convergence performance in practice. In contrast, our algorithm does not have such a limitation.
3 PRELIMINARIES AND PROBLEM STATEMENT
We start from introducing the necessary background of reinforcement learning, with a focus on the policy evaluation problem based on nonlinear function approximation.
1) Policy Evaluation with Nonlinear Approximation: RL problems are formulated using the Markov decision process (MDP) framework defined by a five-tuple {S,A, P, γ,R}, where S denotes the state space and A is the action space; P : S ×A → S represents the transition function, which specifies the probability of one state transitioning to another after taking an action; R denotes the space of the received reward upon taking an action a ∈ A under state s ∈ S (in this paper, we assume that the state and action spaces are finite, but the numbers of states and actions could be large); and γ ∈ [0, 1) is a time-discount factor. For RL problems over an infinite discrete-time horizon {t ∈ N}, the learning agent executes an action at according to the state st and some policy π : S → A. The system then transitions into a new random state st+1 in the next time-slot. Also, the agent receives a random reward Rπ(st, at). The trajectory generated by a policy π is a sequence of state-action pairs denoted as {s1,a1,s2,a2,. . .}. The goal of the agent is to learn an optimal policy π∗ to maximize the long-term discounted total reward. Specifically, for a policy π (could be a randomized policy), the expected reward received by the agent at state s in any given time-slot can be computed as Rπ(st) = Ea∼π(·|s) [ Rπ(st, a) ] . The value
function $V^\pi(s_0) = \mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^t R(s_t) \mid s_0, \pi\right]$ indicates the long-term discounted reward of policy $\pi$ over an infinite horizon with the initial state at $s_0 \in S$. Also, the Bellman equation implies that $V^\pi(\cdot)$ satisfies $V(s) = \mathcal{T}^\pi V(s)$, where $\mathcal{T}^\pi f(s) \triangleq \mathbb{E}[R^\pi(s) + \gamma f(s') \mid a \sim \pi(\cdot|s), s' \sim P(\cdot|s,a)]$ denotes the Bellman operator. In RL, the agent’s goal is to determine an optimal policy $\pi^*$ that maximizes the value function $V^\pi(s)$ from any initial state $s$.
However, the first obstacle in solving RL problems stems from evaluating V π(·) for a given π since P (·|s, a) is unknown. Moreover, it is often infeasible to store V π(s) since the state space S could be large. To address these challenges, one popular approach in RL is to approximate V π(·) using a family of parametric and smooth functions in the form of V π(·) ≈ Vθπ (·), where θπ ∈ Rd is a d-dimensional parameter vector. Here, Θ is a compact subspace. For notational simplicity, we will omit all superscripts “π” whenever the policy π is clear from the context. In this paper, we focus on nonlinear function approximation, i.e., Vθ(·) : S → R is a nonlinear function with respect to (w.r.t.) θ. For example, Vθ(·) could be based on a θ-parameterized nonlinear DNN. We assume that the gradient and Hessian of Vθ(·) exist and are denoted as: gθ(s) := ∇θVθ(s) ∈ Rd, Hθ(s) := ∇2θVθ(s) ∈ Rd×d.
Our goal is to find the optimal parameter θ∗ ∈ Rd that minimizes the error between Vθ∗(·) and V (·). This problem can be formulated as minimizing the mean-squared projected Bellman error (MSPBE) of the value function as follows (Liu et al., 2018):
$$\mathrm{MSPBE}(\theta) := \frac{1}{2}\left\|\mathbb{E}_{s\sim D^\pi(\cdot)}\left[(\mathcal{T}^\pi V_\theta(s) - V_\theta(s))\nabla_\theta V_\theta(s)^\top\right]\right\|^2_{D^{-1}} = \max_{\omega\in\mathbb{R}^d}\left(-\frac{1}{2}\mathbb{E}_{s\sim D^\pi(\cdot)}\left[(\omega^\top g_\theta(s))^2\right] + \left\langle\omega,\, \mathbb{E}_{s\sim D^\pi(\cdot)}\left[(\mathcal{T}^\pi V_\theta(s) - V_\theta(s))g_\theta(s)\right]\right\rangle\right), \quad (1)$$
where $D^\pi(\cdot)$ is the stationary state distribution under policy $\pi$ and $D = \mathbb{E}_{s\sim D^\pi}[g_\theta(s)g_\theta(s)^\top] \in \mathbb{R}^{d\times d}$. 2) Primal-Dual Optimization for MSPBE: It is shown in (Liu et al., 2018) (cf. Proposition 1) that minimizing MSPBE(θ) in (1) is equivalent to solving a primal-dual minimax optimization problem:
$$\min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathbb{R}^d} L(\theta,\omega), \quad (2)$$
where $L(\theta,\omega) \triangleq \langle\omega, \mathbb{E}_{s\sim D^\pi(\cdot)}[(\mathcal{T}^\pi V_\theta(s) - V_\theta(s))g_\theta(s)^\top]\rangle - \frac{1}{2}\mathbb{E}_{s\sim D^\pi(\cdot)}[(\omega^\top g_\theta(s))^2]$. Since the distribution $D^\pi(\cdot)$ is unknown and the expectation cannot be evaluated directly, one often considers the following empirical minimax problem by replacing the expectation in $L(\theta,\omega)$ with a finite sample-average approximation in the stochastic objective function based on an $M$-step trajectory, i.e., $\min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathbb{R}^d} L(\theta,\omega) = \min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathbb{R}^d} \frac{1}{M}\sum_{i=1}^M L_i(\theta,\omega)$, where
$$L_i(\theta,\omega) := \langle\omega, [R(s_i,a_i,s_{i+1}) + \gamma V_\theta(s_{i+1}) - V_\theta(s_i)]\times g_\theta(s_i)\rangle - \frac{1}{2}(\omega^\top g_\theta(s_i))^2. \quad (3)$$
Solving the above empirical minimax problem for MSPBE constitutes the rest of this paper.
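To make Eq. (3) concrete, the following sketch evaluates the per-sample objective for one transition; it assumes (our assumption, not spelled out in the paper) that the value estimates and the flattened gradient of the value network at $s_i$ have already been computed, e.g., by automatic differentiation:

```python
import numpy as np

def per_sample_objective(omega, v_s, v_s_next, g_s, reward, gamma):
    """L_i(theta, omega) of Eq. (3) for a single transition (s_i, a_i, s_{i+1}).

    v_s, v_s_next: V_theta(s_i) and V_theta(s_{i+1});
    g_s: grad_theta V_theta(s_i) flattened into a vector;
    reward: R(s_i, a_i, s_{i+1}); gamma: discount factor.
    """
    td_error = reward + gamma * v_s_next - v_s   # sampled Bellman residual
    inner = float(np.dot(omega, g_s))            # omega^T g_theta(s_i)
    return td_error * inner - 0.5 * inner ** 2
```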
4 SOLUTION APPROACH
As mentioned in Section 3, based on an M -step trajectory {s1, a1 · · · , sM , aM , sM+1} generated by some policy π, our goal is to solve the empirical primal-dual and finite-sum optimization problem:
$$\min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathcal{W}} \frac{1}{M}\sum_{i=1}^{M} L_i(\theta,\omega) = \min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathcal{W}} L(\theta,\omega), \quad (4)$$
where $\mathcal{W}$ is assumed to be a convex constraint set (Problem (4) becomes Problem (2) when $\mathcal{W} = \mathbb{R}^d$). In the Appendix, we also discuss the minimax problem with $\theta \in \Theta$, where $\Theta$ is a convex constraint set (see Appendix 12 for details). Note that Problem (4) could be non-convex (e.g., DNN-based nonlinear approximation). Let $J(\theta) \triangleq \max_{\omega\in\mathcal{W}} L(\theta,\omega)$. Then, we can equivalently rewrite Problem (4) as follows: $\min_{\theta\in\mathbb{R}^d}\max_{\omega\in\mathcal{W}} L(\theta,\omega) = \min_{\theta\in\mathbb{R}^d} J(\theta)$. Note from (3) that $L(\theta,\omega)$ is strongly concave w.r.t. $\omega$, which guarantees the existence and uniqueness of the solution to the problem $\max_{\omega\in\mathcal{W}} L(\theta,\omega)$, $\forall\theta\in\mathbb{R}^d$. Then, given $\theta\in\mathbb{R}^d$, we define the following notation: $\omega^*(\theta) := \arg\max_{\omega\in\mathcal{W}} L(\theta,\omega)$. Thus, $J(\theta)$ can be further written as:
$$J(\theta) = L(\theta,\omega^*) = \max_{\omega\in\mathcal{W}} L(\theta,\omega). \quad (5)$$
The function $J(\theta)$ can be viewed as a finite empirical version of the MSPBE. We aim to minimize $J(\theta)$ by finding the stationary point of $L(\theta,\omega)$. To simplify the notation, we use $\omega^*$ to denote $\omega^*(\theta)$. Note that if $D$ in Eq. (1) is positive definite, Problem (4) is strongly concave in $\omega$, but non-convex in $\theta$ in general due to the non-convexity of the function $V_\theta$. Thus, the stated primal-dual objective function is an NCSC optimization problem. In this paper, we make the following assumptions: Assumption 1 ($\mu$-Strong Concavity). The differentiable function $L(\theta,\omega)$ is $\mu$-strongly concave in $\omega$: $L(\theta,\omega) \le L(\theta,\omega') + \nabla_\omega L(\theta,\omega')^\top(\omega-\omega') - \frac{\mu}{2}\|\omega-\omega'\|^2$, $\forall\omega,\omega'\in\mathbb{R}^d$, $\mu>0$ and any fixed $\theta\in\mathbb{R}^d$. The above condition is equivalent to: $\|\nabla_\omega L(\theta,\omega) - \nabla_\omega L(\theta,\omega')\| \ge \mu\|\omega-\omega'\|$, $\forall\omega,\omega'\in\mathbb{R}^d$. Similar proofs can be found in Lemmas 2 and 3 in Zhou (2018). Assumption 2 ($L_f$-Smoothness). For $i=1,2,\ldots,M$, both gradients $\nabla_\theta L_i(\theta,\omega)$ and $\nabla_\omega L_i(\theta,\omega)$ are $L_f$-smooth. That is, for all $\theta,\theta'\in\mathbb{R}^d$ and $\omega,\omega'\in\mathbb{R}^d$, there exists a constant $L_f>0$ such that $\|\nabla L_i(\theta,\omega) - \nabla L_i(\theta',\omega')\| \le L_f\left(\|\theta-\theta'\| + \|\omega-\omega'\|\right)$.
Algorithm 1 The Variance-Reduced Primal-Dual Stochastic Gradient Method (VRPD).
Input: An M-step trajectory of state-action pairs {s1, a1, s2, a2, · · · , sM, aM, sM+1} generated from a given policy; step sizes α, β ≥ 0; initialization points θ0 ∈ Rd, ω0 ∈ W.
Output: (θ(K̃), ω(K̃)), where K̃ is independently and uniformly picked from {1, · · · , K};
1: for k = 0, 1, 2, · · · , K − 1 do
2:   If mod(k, q) = 0, compute full gradients G(k)θ, G(k)ω as in Eq. (6).
3:   Otherwise, select S samples independently and uniformly from [M], and compute gradients as in Eq. (7).
4:   Perform the primal-dual updates to obtain the next iterate θ(k+1), ω(k+1) as in Eq. (8).
5: end for
Assumption 3 (Bounded Variance). There exists a constant $\sigma>0$ such that for all $\theta\in\mathbb{R}^d,\omega\in\mathbb{R}^d$, $\frac{1}{M}\sum_{i=1}^M\|\nabla_\theta L_i(\theta,\omega)-\nabla_\theta L(\theta,\omega)\|^2 \le \sigma^2$ and $\frac{1}{M}\sum_{i=1}^M\|\nabla_\omega L_i(\theta,\omega)-\nabla_\omega L(\theta,\omega)\|^2 \le \sigma^2$.
In the above assumptions, Assumption 1 is satisfied if the number of samples $M$ is sufficiently large, coupled with the fact that the matrix $D$ is positive definite. To see this, note that $\mu = \lambda_{\min}(D) > 0$, where $D = \mathbb{E}_s[\nabla_\theta V_\theta(s)\nabla_\theta V_\theta(s)^\top] \in \mathbb{R}^{d\times d}$ and $D$ tends to be full-rank as $M$ increases. Thus, as soon as we find a $\mu > 0$ when $M$ is sufficiently large, this $\mu$ is independent of $M$ as $M$ continues to increase. Assumption 2 is standard in the optimization literature. Assumption 3 is also commonly adopted for proving convergence results of SGD- and VR-based algorithms, or algorithms that draw a mini-batch of samples instead of all samples. Assumption 3 is guaranteed to hold under the compact set condition and is common for stochastic approximation algorithms for minimax optimization (Qiu et al., 2020; Lin et al., 2020a). Assumptions 1–3 are also general assumptions often used in temporal difference (TD) problems (see, e.g., (Qiu et al., 2020; Wai et al., 2019)). With these assumptions, we are now in a position to present our algorithms and their convergence performance results.
4.1 THE VARIANCE-REDUCED PRIMAL-DUAL METHOD
In this section, we first present the variance-reduced primal-dual (VRPD) algorithm for solving policy evaluation problems, followed by the theoretical convergence results. Due to space limitation, we provide a proof sketch in the main text and relegate the proof to the supplementary material.
1) Algorithm Description: The full description of VRPD is illustrated in Algorithm 1. In VRPD, for every q iterations, the algorithm calculates the full gradients as follows:
$$G^{(k)}_\theta = \frac{1}{M}\sum_{i\in [M]}\nabla_\theta L_i(\theta^{(k)},\omega^{(k)}); \qquad G^{(k)}_\omega = \frac{1}{M}\sum_{i\in [M]}\nabla_\omega L_i(\theta^{(k)},\omega^{(k)}). \quad (6)$$
In all other iterations, VRPD selects a batch of samples S and computes variance-reduced gradient estimators as:
$$G^{(k)}_\theta = \frac{1}{|S|}\sum_{i\in S}\left(\nabla_\theta L_i(\theta^{(k)},\omega^{(k)}) - \nabla_\theta L_i(\theta^{(k-1)},\omega^{(k-1)}) + G^{(k-1)}_\theta\right); \quad (7a)$$
$$G^{(k)}_\omega = \frac{1}{|S|}\sum_{i\in S}\left(\nabla_\omega L_i(\theta^{(k)},\omega^{(k)}) - \nabla_\omega L_i(\theta^{(k-1)},\omega^{(k-1)}) + G^{(k-1)}_\omega\right). \quad (7b)$$
The estimators in (7) are constructed iteratively based on the previous update information $\nabla_\theta L_i(\theta^{(k-1)},\omega^{(k-1)})$ (resp. $\nabla_\omega L_i(\theta^{(k-1)},\omega^{(k-1)})$) and $G^{(k-1)}_\theta$ (resp. $G^{(k-1)}_\omega$). VRPD updates the primal and dual variables as follows:
$$\theta^{(k+1)} = \theta^{(k)} - \beta G^{(k)}_\theta; \quad (8a)$$
$$\omega^{(k+1)} = \mathcal{P}_{\mathcal{W}}\big(\omega^{(k)} + \alpha G^{(k)}_\omega\big) = \arg\min_{\tilde\omega\in\mathcal{W}}\|\tilde\omega - (\omega^{(k)} + \alpha G^{(k)}_\omega)\|^2, \quad (8b)$$
where the parameters α and β are constant learning rates for primal and dual updates, respectively.
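A compact sketch of one inner-loop VRPD iteration (Eqs. (7)–(8)) is given below. The helper `grad_fns` and the projection `project_W` are our own illustrative abstractions: `grad_fns` is assumed to return the minibatch-averaged gradients of L with respect to θ and ω at a given point, and on every q-th iteration the estimators would instead be reset to the full gradients of Eq. (6).

```python
import numpy as np

def vrpd_inner_step(theta, omega, grad_fns, S_idx, prev_point, G_prev, alpha, beta, project_W):
    """One inner-loop VRPD update (variance-reduced estimators + primal-dual step)."""
    # minibatch gradients at the current and previous iterates over the same sample set S
    g_th_now, g_om_now = grad_fns(theta, omega, S_idx)
    g_th_old, g_om_old = grad_fns(prev_point[0], prev_point[1], S_idx)
    # variance-reduced gradient estimators, Eq. (7)
    G_theta = g_th_now - g_th_old + G_prev[0]
    G_omega = g_om_now - g_om_old + G_prev[1]
    # primal descent and projected dual ascent with constant step sizes, Eq. (8)
    theta_next = theta - beta * G_theta
    omega_next = project_W(omega + alpha * G_omega)
    return theta_next, omega_next, (G_theta, G_omega)
```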
2) Convergence Performance: In this paper, we propose a new metric for convergence analysis:
M(k) := ‖∇J(θ(k)) ‖2 + 2‖ω(k) − ω∗(θ(k))‖2. (9) The first term in (9) measures the convergence of the primal variable θ. As common in nonconvex optimization analysis, ‖∇J(θ)‖2 = 0 indicates that θ is a first-order stationary point
(FOSP) of Problem (4). The second term in (9) measures the convergence of $\omega^{(k)}$ to the unique maximizer $\omega^*(\theta^{(k)})$ for $L(\theta^{(k)}, \cdot)$. Note that if Problem (4) is unconstrained in the dual (i.e., $\omega \in \mathbb{R}^d$), it follows from Assumption 2 and $\|\nabla_\omega L(\theta^{(k)},\omega^*(\theta^{(k)}))\|^2 = 0$ that $M(k) \ge \|\nabla J(\theta^{(k)})\|^2 + \frac{2}{L_f^2}\|\nabla_\omega L(\theta^{(k)},\omega^{(k)})\|^2$. We now introduce the notion of approximate first-order stationary points. We say that a point $\{\theta,\omega\}$ is an $\epsilon$-stationary point of the function $L(\theta,\omega)$ if $M \le \epsilon$ is satisfied. Remark. Several important remarks on the connections between our metric $M(k)$ and the conventional convergence metrics in the literature are in order. A conventional convergence metric in the literature for NCSC minimax optimization is $\|\nabla J(\theta^{(k)})\|^2$ (Lin et al., 2020a; Luo et al., 2020; Zhang et al., 2021), which is the first term of $M(k)$ and measures the convergence of the primal variable $\theta$ under a given dual variable $\omega$. This is because $\|\nabla J(\theta)\|^2 = 0$ implies that $\theta$ is a FOSP. The novelty in our convergence metric is the second term in $M(k)$, which measures the convergence of $\omega^{(k)}$ to the unique maximizer $\omega^*_k$ for $L(\theta_k, \cdot)$.
With our proposed convergence metric in (9), we have the following convergence result:
Theorem 1. Under Assumptions 1–3, choose step-sizes $\alpha \le \min\left\{\frac{1}{4L_f}, \frac{2\mu}{34L_f^2+2\mu^2}\right\}$ and $\beta \le \min\left\{\frac{1}{4L_f}, \frac{1}{2(L_f+L_f^2/\mu)}, \frac{\mu}{8\sqrt{17}L_f^2}, \frac{\mu^2\alpha}{8\sqrt{34}L_f^2}\right\}$. Let $q=\sqrt{M}$ and $S=\sqrt{M}$; it holds that:
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[M(k)] \le \frac{1}{K\min\{1, L_f^2\}}\left[\frac{16L_f^2}{\alpha\mu}C_2 + \frac{2}{\beta}C_1\right] = \mathcal{O}\left(\frac{1}{K}\right),$$
where $C_1 \triangleq \mathbb{E}[J(\theta^{(0)})] - \mathbb{E}[J(\theta^{(*)})]$ and $C_2 \triangleq \mathbb{E}\|\omega^*(\theta^{(0)}) - \omega^{(0)}\|^2$. Corollary 2. The overall stochastic sample complexity is $\mathcal{O}(\sqrt{M}\kappa^3\epsilon^{-1} + M)$. Note that $\kappa = L_f/\mu$ denotes the condition number.
Remark. Theorem 1 states that VRPD achieves an O(1/K) convergence rate to an -FOSP. The most challenging part in proving Theorem 1 stems from the fact that one needs to simultaneously evaluate the progresses of the gradient descent in the primal domain and the gradient ascent in the dual domain of the minimax problem.
Toward this end, the nPD-VR method in (Wai et al., 2019) employs ‖∇ωL(θ(k),ω(k))‖2 in their metric to evaluate convergence. However, this approach yields a term F (K) , E[L(θ(0),ω(0)) − L(θ(K),ω(K))] in their convergence upper bound in the form of O(F (K)/K) (cf. Theorem 1, Eq. (26) in (Wai et al., 2019)). Since F (K) depends on K, it is unclear whether or not the nPD-VR method in (Wai et al., 2019) can achieve an O(1/K) convergence rate. This unsatisfactory result motivates us to propose a new metric M(k) in Eq. (9) to evaluate the convergence of our VRPD algorithm. The first part of our convergence metric ‖∇J(θ(k)) ‖2 measures the stationarity gap of the primal variable, while the second part 2‖ω(k) − ω∗(θ(k))‖2 measures the dual optimality gap. Consequently, we bound per-iteration change in J(θ) instead of the function L(θ(k),ω(k)). This helps us avoid the technical limitations of (Wai et al., 2019) and successfully establish the O(1/K) convergence rate, hence resolving an open problem in this area. Remark. VRPD adopts a large O(1) (i.e., constant) step-size compared to the O(1/M) step-size of nPD-VR (Wai et al., 2019), where M is the dataset size. This also induces a faster convergence. Also, VRPD’s estimator uses fresher information from the previous iteration, while VR-STSG (Qiu
et al., 2020) and nPD-VR (Wai et al., 2019) only use the information from the beginning of q-sized windows. Collectively, VRPD makes considerably larger progress than state-of-the-art algorithms (Qiu et al., 2020; Wai et al., 2019).
Algorithm 2 Adaptive-batch VRPD method (VRPD+).
Input: A trajectory of state-action pairs {s1, a1, s2, a2, · · · , sM, aM, sM+1} generated from a given policy; step sizes α, β ≥ 0; initialization points θ0 ∈ Θ, ω0 ∈ Rd.
Output: (θ(K̃), ω(K̃)), where K̃ is independently and uniformly picked from {1, · · · , K};
1: for k = 0, 1, 2, · · · , K − 1 do
2:   If mod(k, q) = 0, select Ns indices independently and uniformly from [M] as in Eq. (10) and calculate stochastic gradients as in Eq. (11);
3:   Otherwise, select S independently and uniformly from [M]; compute gradients as in Eq. (7);
4:   Perform the primal-dual updates as in Eq. (8).
5: end for
4.2 THE ADAPTIVE-BATCH VRPD METHOD (VRPD+)
Note that VRPD still requires full gradients every q iterations, which may entail a high sample complexity. Upon closer observations, we note that accurate gradient estimation plays an important role only in the later stage of the convergence process. This motivates us to further lower the sample complexity of VRPD by using adaptive batch sizes. Toward this end, we propose an adaptive-batch VRPD method (VRPD+) to lower the sample complexity of the VRPD algorithm in Algorithm 1.
1) Algorithm Description: The full description of VRPD+ is illustrated in Algorithm 2. In VRPD+, our key idea is to use the gradients calculated in the previous loop to adjust the batch size Ns of the next loop. Specifically, VRPD+ chooses Ns in the k-th iteration as:
$$N_s = \min\left\{c_\gamma\sigma^2(\gamma^{(k)})^{-1},\; c_\epsilon\sigma^2\epsilon^{-1},\; M\right\}, \quad (10)$$
where $c_\gamma, c_\epsilon > c$ for a certain constant $c$, $M$ denotes the size of the dataset, $\sigma^2$ is the variance bound, and $\gamma^{(k+1)} = \frac{1}{q}\sum_{i=(n_k-1)q}^{k}\|G^{(i)}_\theta\|^2$ is computed from the stochastic gradients calculated in the previous iterations. In VRPD+, for every q iterations, we select $N_s$ samples independently and uniformly from $[M]$ and compute gradient estimators as follows:
$$G^{(k)}_\theta = \frac{1}{N_s}\sum_{i\in \mathcal{N}_s}\nabla_\theta L_i(\theta^{(k)},\omega^{(k)}); \qquad G^{(k)}_\omega = \frac{1}{N_s}\sum_{i\in \mathcal{N}_s}\nabla_\omega L_i(\theta^{(k)},\omega^{(k)}). \quad (11)$$
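The batch-size rule of Eq. (10) can be sketched as follows (illustrative only; the $\epsilon$-dependent term reflects our reading of the extraction-damaged equation, with c_gamma and c_eps the two constants and eps the target accuracy):

```python
import numpy as np

def adaptive_batch_size(recent_grad_sq_norms, sigma2, eps, M, c_gamma, c_eps):
    """Adaptive batch size N_s of Eq. (10).

    recent_grad_sq_norms: the values ||G_theta^(i)||^2 from the previous q inner
        iterations; their average is gamma^(k).
    sigma2: variance bound of Assumption 3; M: dataset size.
    """
    gamma_k = float(np.mean(recent_grad_sq_norms))
    n_s = min(c_gamma * sigma2 / max(gamma_k, 1e-12),  # larger recent gradients -> smaller batch
              c_eps * sigma2 / eps,
              M)
    return max(1, int(np.ceil(n_s)))
```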
For other iterations, VRPD+ is exactly the same as VRPD. Next, we will theoretically show that such an adaptive batch-size scheme still retains the same convergence rate, while achieving an improved sample complexity. 2) Convergence Performance: For VRPD+, we have the following convergence performance result: Theorem 3. Under Assumptions 1–3, choose step-sizes $\alpha \le \min\left\{\frac{1}{4L_f}, \frac{2\mu}{34L_f^2+2\mu^2}\right\}$ and $\beta \le \min\left\{\frac{1}{4L_f}, \frac{1}{2(L_f+L_f^2/\mu)}, \frac{\mu}{8\sqrt{17}L_f^2}, \frac{\mu^2\alpha}{8\sqrt{34}L_f^2}\right\}$. Let $q=\sqrt{M}$, $S=\sqrt{M}$ and $c_\gamma \ge 288L_f^2/\mu^2 + 8$ in VRPD+, where $c_\gamma \ge c$ for some constant $c > 4K + \frac{68K}{\beta\mu^2}$. With constants $C_1 \triangleq \mathbb{E}[J(\theta^{(0)})] - \mathbb{E}[J(\theta^{(*)})]$ and $C_2 \triangleq \mathbb{E}[\|\omega^*(\theta^{(0)}) - \omega^{(0)}\|^2]$, it holds that:
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[M(k)] \le \frac{1}{K\min\{1, L_f^2\}}\left[\frac{K\epsilon}{2} + \frac{16L_f^2}{\alpha\mu}C_2 + \frac{2}{\beta}C_1\right] = \mathcal{O}\left(\frac{1}{K}\right) + \frac{\epsilon}{2}.$$
Corollary 4. The overall stochastic sample complexity is $\mathcal{O}(\sqrt{M}\kappa^3\epsilon^{-1} + M)$, where $\kappa = L_f/\mu$ denotes the condition number.
Remark. From Theorem 3, it can be seen that VRPD+ achieves the same convergence rate as that of VRPD. Since we choose the subsample set $\mathcal{N}_s$ instead of full gradient calculation in VRPD+, it achieves a much lower sample complexity compared to VRPD. Additionally, the convergence performance of VRPD+ is affected by the additive constant $\epsilon/2$, which is due to the use of the adaptive batch size in each outer-loop of VRPD+. Also, it can be observed that the algorithm convergence rate is affected by the carefully chosen step-sizes α and β, because either a too small or too large step-size may have a negative impact on the convergence of the algorithm.
Figure 2: Cartpole-v0 environment.
Figure 3: MountainCar-v0 environment.
Figure 4: Cartpole-v0 environment.
Remark. The proof of Theorem 3 follows from a similar approach to the proof of Theorem 1. The key difference and most challenging part of proving Theorem 3 stem from the relaxation on $\|\nabla_\theta L(\theta^{(k)},\omega^{(k)}) - G^{(k)}_\theta\|^2$ and $\|\nabla_\omega L(\theta^{(k)},\omega^{(k)}) - G^{(k)}_\omega\|^2$. Thanks to the bounded variance in Assumption 3 and the selected $N_s$ in Eq. (10), we are able to derive outer-loop bounds for the primal and dual gaps, respectively. We refer readers to the Appendix for the details of the complete proof.
5 EXPERIMENTAL RESULTS
In this section, we conduct numerical experiments to verify our theoretical results. We compare our work with the basic stochastic gradient (SG) method (Lin et al., 2020b) and three state-of-the-art algorithms for PE: nPD-VR (Wai et al., 2019), STSG (Qiu et al., 2020) and VR-STSG (Qiu et al., 2020). Due to space limitation, we provide our detailed experiment settings in the Appendix.
Figure 5: MSE comparison with 10 trials (MSE versus trajectory length, linear vs. nonlinear approximation, on Mountaincar-v0 and Cartpole-v0).
Numerical Results: We set the constant learning rates $\alpha = 10^{-3}$, $\beta = 10^{-1}$, mini-batch size $q = \lceil\sqrt{M}\rceil$, constant $c = 32$ and solution accuracy $\epsilon = 10^{-3}$. First, we compare the loss value and gradient norm performance based on MountainCar-v0 and Cartpole-v0 with nPD-VR, SG, STSG, and VR-STSG in Figs. 1 and 2. We set the constraint $\mathcal{W} = [0, 10]^n$ and initialize all algorithms at the same point, which is generated randomly from the normal distribution. We can see that VR-STSG and nPD-VR slowly converge after 40 epochs, while STSG and SG fail to converge. VRPD converges faster than all the other algorithms with the same step-size values. As for Cartpole-v0, we clearly see a trend of approaching zero loss with VRPD. These results are consistent with our theoretical result that one can use a relatively large step-size with VRPD, which leads to faster convergence. Also, we compare the sample complexity of VRPD and VRPD+ in MountainCar-v0 and Cartpole-v0, and the results are shown in Figs. 3 and 4, respectively. We can see
that VRPD+ converges to the same level with far fewer samples than VRPD does. Next, we compared the mean squared error (MSE) between the ground-truth value function and the estimated value function over 10 independent runs with linear approximation and nonlinear approximation. In Fig. 5, with the same parameter size, nonlinear approximation always achieves smaller MSE than linear approximation (Du et al., 2017). Further experiments on the performance of J(θ) are shown in the supplementary material.
6 CONCLUSION
In this paper, we proposed and analyzed two algorithms called VRPD and VRPD+ for policy evaluation with nonlinear approximation. The VRPD algorithm is based on a simple single-timescale framework by utilizing variance reduction techniques. The VRPD algorithm allows the use of constant step-sizes and achieves an O(1/K) convergence rate. The VRPD+ algorithm improves VRPD by further applying an adaptive batch size based on historical stochastic gradient information. Our experimental results also confirmed our theoretical findings in convergence and sample complexity. | 1. What is the focus and contribution of the paper regarding nonconvex strongly concave min-max problems?
2. What are the strengths and weaknesses of the proposed variance-reduced algorithms?
3. Do you have any concerns or questions about the role of \Theta in the paper, the change in the maximization variable, and the compactness of \mathcal{W}?
4. Why does the paper claim it's a single timescale algorithm when there are two timescales in the algorithm?
5. How does the reviewer assess the sample complexity in Theorem 1, and why do they believe it should be a larger power of \kappa?
6. What is the significance of the novel result over the state-of-the-art in optimization literature, particularly for nonconvex-concave min-max problems?
7. Can you provide more information on the literature review of VR algorithms for smooth nonconvex and nonconvex-concave min-max problems? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes two variance-reduced (VR) algorithms for a specific nonconvex strongly concave min-max problem obtained after reformulating the policy evaluation problem as a min-max game with nonlinear function approximation for the value function of the policy. They provide the first O(1/K) convergence and the possibility of removing the full gradient computation required in VR methods in certain iterations.
Strengths And Weaknesses
The convergence results are provided for the first time and they look theoretically sound to me.
Weakness:
I am not sure about the role of \Theta in the paper. It seems that it is mentioned in the last paragraph on page 4 and then dropped subsequently in the min-max formulation (2) or (4). Would the analysis work if such \Theta is explicitly imposed on the min-max problem? Note that in such a case, the |J(\theta)| may not converge to zero.
The maximization over \omega \in R^d of equation (2) changes to \omega \in \mathcal{W} in equation (4). Does that change the policy evaluation problem? There is no justification provided for this abrupt change. Is compactness of \mathcal{W} necessary for convergence?
Why does the paper say it is a single timescale algorithm? I see two timescales \alpha and \beta in the algorithm. Are they set the same in the implementation?
I don't see how the sample complexity is O(\kappa/\eps) in Theorem 1. Based on the step-size policy, I believe it should be a larger power of \kappa. E.g., \kappa^3/\eps.
Since the main technical thrust of the paper is VR algorithms for the min-max problem, I was expecting some literature review of such algorithms, at least for nonconvex settings. It is important to put this result in the context of such literature and show the novelty of their work over the state-of-the-art in the optimization literature. I recommend that the authors include papers/articles working on VR algorithms for smooth nonconvex (note that J(\theta) is a smooth nonconvex function) and nonconvex-concave minx-max problems.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and easy to follow. The proofs seem clear to me. The only comment I have is to define \epsilon_\theta in the proof of Theorem 1.
In terms of quality and novelty, the algorithm design is not new. A similar VR gradient design was used in Nonconvex smooth optimization in the past. Applying this to the extended nonconvex strongly concave minmax problem is new in the literature if I am not missing something. Although I am not updated with the state-of-the-art in this specific area. |
ICLR | Title
On $\mathcal{O}(1/K)$ Convergence and Low Sample Complexity for Single-Timescale Policy Evaluation with Nonlinear Function Approximation
Abstract
Learning an accurate value function for a given policy is a critical step in solving reinforcement learning (RL) problems. So far, however, the convergence speed and sample complexity performances of most existing policy evaluation algorithms remain unsatisfactory, particularly with non-linear function approximation. This challenge motivates us to develop a new variance-reduced primal-dual method (VRPD) that is able to achieve a fast convergence speed for RL policy evaluation with nonlinear function approximation. To lower the high sample complexity limitation of variance-reduced approaches (due to the periodic full gradient evaluation with all training data), we further propose an enhanced VRPD method with an adaptive-batch adjustment (VRPD+). The main features of VRPD include: i) VRPD allows the use of constant step sizes and achieves the O(1/K) convergence rate to the first-order stationary points of non-convex policy evaluation problems; ii) VRPD is a generic single-timescale algorithm that is also applicable for solving a large class of non-convex strongly-concave minimax optimization problems; iii) By adaptively adjusting the batch size via historical stochastic gradient information, VRPD+ is more sample-efficient empirically without loss of theoretical convergence rate. Our extensive numerical experiments verify our theoretical findings and showcase the high efficiency of the proposed VRPD and VRPD+ algorithms compared with the state-of-the-art methods.
1 INTRODUCTION
In recent years, advances in reinforcement learning (RL) have achieved enormous successes in a large number of areas, including healthcare (Petersen et al., 2019; Raghu et al., 2017b), financial recommendation (Theocharous et al., 2015), resources management (Mao et al., 2016; Tesauro et al., 2006) and robotics (Kober et al., 2013; Levine et al., 2016; Raghu et al., 2017a), to name just a few. In RL applications, an agent interacts with an environment and repeats the tasks of observing the current state, performing a policy-based action, receiving a reward, and transition to the next state. A key step in many RL algorithms is the policy evaluation (PE) problem, which aims to learn the value function that estimates the expected long-term accumulative reward for a given policy. Value functions not only explicitly provide the agent’s accumulative rewards, but could also be utilized to update the current policy so that the agent can visit valuable states more frequently (Bertsekas & Tsitsiklis, 1995; Lagoudakis & Parr, 2003). In RL policy evaluation, two of the most important performance metrics are convergence rate and sample complexity. First, since policy evaluation is a subroutine of an overall RL task, developing fast-converging policy evaluation algorithms is of critical importance to the overall efficiency of RL. Second, due to the challenges in collecting a large number of training samples (trajectories of state-action pairs) for policy evaluations in RL, reducing the number of samples (i.e., sample complexity) can significantly alleviate the burden of data collection for solving policy evaluation problems. These two important aspects motivate us to pursue a fast-converging policy evaluation algorithm with a low sample-complexity in this paper.
Among various algorithms for policy evaluation, one of the simplest and most effective methods is the temporal difference (TD) learning approach (Sutton, 1988). Instead of focusing on the predicted and actual outcomes, the key idea of the TD learning is to make the difference between temporally successive predictions small. Specifically, the TD learning approach learns the value function by
using the Bellman equation to bootstrap from the current estimated value function. To date, there have been many algorithms proposed within the family of TD learning (Dann et al., 2014). However, most of these methods suffer from either a unstable convergence performance, (e.g., TD(λ) (Sutton, 1988) for off-policy training) or a high computational complexity (e.g., least-squares temporal difference (LSTD) (Boyan, 2002; Bradtke & Barto, 1996)) in training with massive features. The limitation of these early attempts is largely due to the fact that they do not leverage the gradient-oracle in policy evaluation. Thus, in recent years, gradient-based policy evaluation algorithms have become increasingly prevalent. However, the design of efficient gradient-based policy evaluation algorithm is a non-trivial task. On one hand, as an RL task becomes more sophisticated, it is more appropriate to utilize nonlinear function approximation (e.g., deep neural network (DNN)) to model the value function. However, when working with nonlinear DNN models, the convergence performance of the conventional single-timescale TD algorithms may not be guaranteed (Tsitsiklis & Van Roy, 1997). To address this issue, some convergent two-timescale algorithms (Bhatnagar et al., 2009; Chung et al., 2018) have been proposed at the expense of higher implementation complexity. On the other hand, modern policy evaluation tasks could involve a large amount of state transition data. To perform policy evaluation, algorithms typically need to calculate full gradients that require all training data (e.g., gradient temporal difference (GTD) (Sutton et al., 2008) and TD with gradient correction (TDC) (Sutton et al., 2009b)), which entails a high sample complexity. So far, existing works on PE are either focus on linear approximation (GTD2 (Sutton et al., 2009b), PDBG (Du et al., 2017), SVRG (Du et al., 2017), SAGA (Du et al., 2017)) or have such a slower convergence performance (STSG (Qiu et al., 2020), VR-STSG (Qiu et al., 2020), nPD-VR (Wai et al., 2019)) (see detailed discussions in Section. 2). In light of the above limitations, in this paper, we ask the following question: Could we develop an efficient single-timescale gradient-based algorithm for policy evaluation based on nonlinear function approximation?
In this paper, we give an affirmative answer to the above question. Specifically, we propose an efficient gradient-based variance-reduced primal-dual algorithm (VRPD) to tackle the policy evaluation problem with nonlinear function approximation, which we recast as a minimax optimization problem. Our VRPD algorithm admits a simple and elegant single-timescale algorithmic structure. Then, we further enhance VRPD by proposing VRPD+, which uses adaptive batch sizes to relax the periodic full gradient evaluation to further reduce sample complexity. The main contribution of this paper is that our proposed algorithms achieve an O(1/K) convergence rate (K is the number of iterations) with constant step-sizes for policy evaluation with nonlinear function approximation, which is the best-known result in the literature thus far. Our main results are highlighted as follows:
• By utilizing a variance reduction technique, our VRPD algorithm allows constant step-sizes and enjoys a low sample complexity. We show that, under mild assumptions and appropriate parameter choices, VRPD achieves an O(1/K) convergence rate to the first-order stationary point of a class of nonconvex-strongly-concave (NCSC) minimax problems, which is the best-known result in the literature. To achieve this result, our convergence rate analysis introduces new proof techniques, resolves an open question, and clarifies an ambiguity in the state-of-the-art convergence analysis of VR-based policy evaluation methods (see the 2nd paragraph in Section 2.1 for more discussions).
• VRPD+ significantly improves the sample complexity of the VRPD algorithm for policy evaluation with massive datasets. Our VRPD+ (adaptive-batch VRPD) algorithm incorporates historical information along the optimization path, but does not involve backtracking or condition verification. We show that our VRPD+ algorithm significantly reduces the number of samples and the computational load of gradient evaluations, thanks to our proposed adaptive batch-size technique that is able to avoid full gradient evaluation.
• Our extensive experimental results also confirm that our algorithms outperform the state-of-the-art gradient-based policy evaluation algorithms, and our VRPD+ can further reduce the sample complexity compared to the VRPD algorithm. It is worth noting that, although the focus of our work is on RL policy evaluation, our algorithmic design and proof techniques contribute to the area of minimax optimization and could be of independent theoretical interest.
2 RELATED WORK
1) TD Learning with Function Approximation for Policy Evaluation: TD learning with function approximation plays a vital role in policy evaluation for RL. The key idea of TD learning is to
minimize the Bellman error for approximating the value function. So far, most existing TD learning algorithms with theoretical guarantees focus on the linear setting (e.g., (Sutton et al., 2009a; Srikant & Ying, 2019; Xu et al., 2020b; Stankovic & Stankovic, 2016; Touati et al., 2018)). Doan et al. (2019), Liu et al. (2015), Macua et al. (2014), and Zhang & Xiao (2019) provided finite-time analyses for distributed TD(0) and showed that the convergence rate of their algorithms is O(1/K). It was shown in Du et al. (2017) that policy evaluation with linear function approximation by TD(0) can be formulated as a strongly convex-concave or convex-concave problem, and can be solved by a primal-dual method with a linear convergence rate. However, the linearity assumption cannot be applied in a wide range of policy evaluations with nonlinear models. TD learning with nonlinear (smooth) function approximation is far more complex. Maei et al. (2009) was among the first to propose a general framework for minimizing the generalized mean-squared projected Bellman error (MSPBE) with smooth and nonlinear value functions. However, they adopted two-timescale step-sizes and only obtained slow convergence. Other TD methods with nonlinear function approximation for policy evaluation include (Wang et al., 2017; 2016). Qiu et al. (2020) also investigated nonlinear TD learning and proposed two single-timescale first-order stochastic algorithms. However, the convergence rates of their STSG and VR-STSG are O(1/K^{1/4}) and O(1/K^{1/3}), while our VRPD algorithm achieves a much faster O(1/K) convergence rate. In policy evaluation with nonlinear function approximation, the state-of-the-art and most related work to ours is (Wai et al., 2019), which showed that minimizing the generalized MSPBE problem is equivalent to solving a non-convex-strongly-concave (NCSC) minimax optimization problem via Fenchel duality. However, their best convergence results only hold when the step-size is O(1/M), where M is the size of the dataset. This is problematic for modern RL problems with a large state-action transition dataset. More importantly, although their convergence theorem appears to have a 1/K factor (K being the total number of iterations), their convergence rate bound is of the form $\frac{F(K)+\text{Constant}_1}{K\cdot\text{Constant}_2}$ (cf. Theorem 1, Eq. (26) in Wai et al. (2019)). Notably, the F(K) term in this bound inherently depends on the primal and dual values θ^(K) and ω^(K) at the K-th iteration, respectively. It is unclear whether ω^(K) can be bounded in (Wai et al., 2019), hence leading to an ambiguity in guaranteeing an O(1/K) convergence rate. Thus, whether an O(1/K) convergence rate is achievable in single-timescale policy evaluation with nonlinear function approximation and constant step-sizes remains an open question thus far. The key contribution and novelty of this paper is that we resolve the above open question by proposing two new algorithms, both achieving an O(1/K) convergence rate. To establish this result, we propose a new convergence metric (cf. Eq. (9) in Section 4.1), which necessitates new proof techniques and analysis. For easy comparison, we summarize our algorithms and the related works in Table 1.
2) Relations with NCSC Minimax Optimization: Although the focus of our paper is on RL policy evaluation, our algorithmic techniques are also related to the area of NCSC minimax optimization due to the primal-dual MSPBE formulation (cf. Eq. (2) in Section 3). Early attempts in (Nouiehed et al., 2019; Lin et al., 2020b) developed gradient descent-ascent algorithms to solve NCSC minimax problems. However, these methods suffer from a high sample complexity and slow convergence rates. To overcome this limitation, a variance-reduction algorithm named SREDA (Luo et al., 2020) was proposed for solving NCSC minimax problems, which shares some similarity with our work.
Later, Xu et al. (2020a) enhanced SREDA to allow larger step-sizes. However, our algorithms still differ from SREDA in the following key aspects: (i) Our algorithms are single-timescale algorithms, which are much easier to implement. In comparison, SREDA is a two-timescale algorithm in which an inner concave maximization subproblem must be solved. Thus, to a certain extent, SREDA can be viewed as having a triple-loop structure, and hence its computational complexity is higher than ours. (ii) In the initialization stage, SREDA uses PiSARAH, a subroutine that helps SREDA achieve the desired accuracy at the initialization step and can be seen as an additional step for solving an inner concave maximization subproblem. Thus, SREDA has a higher computation cost than our algorithms. (iii) SREDA has far more parameters than our methods and requires knowledge of the condition number to set its parameters for good convergence performance. By contrast, our algorithms only require the step-sizes α and β to be sufficiently small, which is easier to tune in practice. (iv) SREDA does not provide an explicit convergence rate (it is unclear what its convergence rate is from the proof either). Yet, we show that our VRPD in theory has a lower sample complexity than SREDA.
Another related work in terms of NCSC minimax optimization is (Zhang et al., 2021), which also provided sample complexity upper and lower bounds. However, there remains a gap between the sample complexity lower and upper bounds in (Zhang et al., 2021). By contrast, the sample complexity of our VRPD algorithm matches the lower bound $\mathcal{O}(M + \sqrt{M}\epsilon^{-2})$ in (Zhang et al., 2021), which is the first such result in the literature. Furthermore, the algorithm in (Zhang et al., 2021) contains an inner minimax subproblem (cf. Line 6 of Algorithm 1 in Zhang et al. (2021)). Solving such subproblems in the inner loop incurs high computational costs. For this reason, the algorithm in (Zhang et al., 2021) had to settle for an inexact solution, which hurts the convergence performance in practice. In contrast, our algorithm does not have such a limitation.
3 PRELIMINARIES AND PROBLEM STATEMENT
We start by introducing the necessary background of reinforcement learning, with a focus on the policy evaluation problem based on nonlinear function approximation.
1) Policy Evaluation with Nonlinear Approximation: RL problems are formulated using the Markov decision process (MDP) framework defined by a five-tuple {S,A, P, γ,R}, where S denotes the state space and A is the action space; P : S ×A → S represents the transition function, which specifies the probability of one state transitioning to another after taking an action; R denotes the space of the received reward upon taking an action a ∈ A under state s ∈ S (in this paper, we assume that the state and action spaces are finite, but the numbers of states and actions could be large); and γ ∈ [0, 1) is a time-discount factor. For RL problems over an infinite discrete-time horizon {t ∈ N}, the learning agent executes an action at according to the state st and some policy π : S → A. The system then transitions into a new random state st+1 in the next time-slot. Also, the agent receives a random reward Rπ(st, at). The trajectory generated by a policy π is a sequence of state-action pairs denoted as {s1,a1,s2,a2,. . .}. The goal of the agent is to learn an optimal policy π∗ to maximize the long-term discounted total reward. Specifically, for a policy π (could be a randomized policy), the expected reward received by the agent at state s in any given time-slot can be computed as Rπ(st) = Ea∼π(·|s) [ Rπ(st, a) ] . The value
function $V^{\pi}(s_0) = \mathbb{E}\big[\sum_{t=0}^{\infty}\gamma^{t}R(s_t) \mid s_0, \pi\big]$ indicates the long-term discounted reward of policy π over an infinite horizon with the initial state s_0 ∈ S. Also, the Bellman equation implies that V^π(·) satisfies V^π(s) = T^π V^π(s), where T^π f(s) ≜ E[R^π(s) + γ f(s′) | a ∼ π(·|s), s′ ∼ P(·|s, a)] denotes the Bellman operator. In RL, the agent’s goal is to determine an optimal policy π* that maximizes the value function V^π(s) from any initial state s.
However, the first obstacle in solving RL problems stems from evaluating V π(·) for a given π since P (·|s, a) is unknown. Moreover, it is often infeasible to store V π(s) since the state space S could be large. To address these challenges, one popular approach in RL is to approximate V π(·) using a family of parametric and smooth functions in the form of V π(·) ≈ Vθπ (·), where θπ ∈ Rd is a d-dimensional parameter vector. Here, Θ is a compact subspace. For notational simplicity, we will omit all superscripts “π” whenever the policy π is clear from the context. In this paper, we focus on nonlinear function approximation, i.e., Vθ(·) : S → R is a nonlinear function with respect to (w.r.t.) θ. For example, Vθ(·) could be based on a θ-parameterized nonlinear DNN. We assume that the gradient and Hessian of Vθ(·) exist and are denoted as: gθ(s) := ∇θVθ(s) ∈ Rd, Hθ(s) := ∇2θVθ(s) ∈ Rd×d.
Our goal is to find the optimal parameter θ∗ ∈ Rd that minimizes the error between Vθ∗(·) and V (·). This problem can be formulated as minimizing the mean-squared projected Bellman error (MSPBE) of the value function as follows (Liu et al., 2018):
$$\mathrm{MSPBE}(\theta) := \frac{1}{2}\Big\|\mathbb{E}_{s\sim D_{\pi}(\cdot)}\big[(\mathcal{T}^{\pi}V_{\theta}(s)-V_{\theta}(s))\nabla_{\theta}V_{\theta}(s)^{\top}\big]\Big\|_{D^{-1}}^{2} = \max_{\omega\in\mathbb{R}^{d}}\Big(-\frac{1}{2}\,\mathbb{E}_{s\sim D_{\pi}(\cdot)}\big[(\omega^{\top}g_{\theta}(s))^{2}\big] + \big\langle\omega,\ \mathbb{E}_{s\sim D_{\pi}(\cdot)}\big[(\mathcal{T}^{\pi}V_{\theta}(s)-V_{\theta}(s))\,g_{\theta}(s)\big]\big\rangle\Big), \qquad (1)$$
where D_π(·) is the stationary state distribution under policy π and D = E_{s∼D_π}[g_θ(s) g_θ(s)^⊤] ∈ R^{d×d}. 2) Primal-Dual Optimization for MSPBE: It is shown in (Liu et al., 2018) (cf. Proposition 1) that minimizing MSPBE(θ) in (1) is equivalent to solving a primal-dual minimax optimization problem:
$$\min_{\theta\in\mathbb{R}^{d}}\ \max_{\omega\in\mathbb{R}^{d}}\ L(\theta,\omega), \qquad (2)$$
where $L(\theta,\omega) \triangleq \big\langle\omega,\ \mathbb{E}_{s\sim D_{\pi}(\cdot)}\big[(\mathcal{T}^{\pi}V_{\theta}(s)-V_{\theta}(s))\,g_{\theta}(s)\big]\big\rangle - \frac{1}{2}\,\mathbb{E}_{s\sim D_{\pi}(\cdot)}\big[(\omega^{\top}g_{\theta}(s))^{2}\big]$. Since the distribution D_π(·) is unknown and the expectation cannot be evaluated directly, one often considers the following empirical minimax problem, obtained by replacing the expectation in L(θ,ω) with a finite-sample average based on an M-step trajectory: $\min_{\theta\in\mathbb{R}^{d}}\max_{\omega\in\mathbb{R}^{d}} L(\theta,\omega) = \min_{\theta\in\mathbb{R}^{d}}\max_{\omega\in\mathbb{R}^{d}} \frac{1}{M}\sum_{i=1}^{M}L_i(\theta,\omega)$, where
$$L_i(\theta,\omega) := \big\langle\omega,\ [R(s_i,a_i,s_{i+1}) + \gamma V_{\theta}(s_{i+1}) - V_{\theta}(s_i)]\, g_{\theta}(s_i)\big\rangle - \frac{1}{2}\big(\omega^{\top}g_{\theta}(s_i)\big)^{2}. \qquad (3)$$
Solving the above empirical minimax problem for MSPBE constitutes the rest of this paper.
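To make the empirical objective concrete, the following is a minimal sketch (not the authors' released code) of how one per-sample term L_i(θ,ω) from Eq. (3) can be evaluated with a small neural value function. The two-layer tanh network, its size, the discount value, and the use of PyTorch autograd are illustrative assumptions only.

```python
# Sketch of the per-sample objective L_i(theta, omega) in Eq. (3).
# Assumptions (not from the paper): a small tanh MLP for V_theta and PyTorch autograd
# for g_theta(s) = d V_theta(s) / d theta.
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Nonlinear value-function approximator V_theta(s)."""
    def __init__(self, state_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, s):
        return self.net(s).squeeze(-1)

def flat_grad(scalar, params):
    """Gradient of a scalar w.r.t. a list of parameters, flattened into one vector in R^d."""
    grads = torch.autograd.grad(scalar, params, create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def L_i(vnet, omega, s_i, s_next, r_i, gamma=0.95):
    """L_i(theta, omega) = <omega, delta_i * g_theta(s_i)> - 0.5 * (omega^T g_theta(s_i))^2."""
    params = list(vnet.parameters())
    g_theta = flat_grad(vnet(s_i), params)            # g_theta(s_i), kept in the autograd graph
    delta = r_i + gamma * vnet(s_next) - vnet(s_i)    # sampled Bellman residual (TD error)
    inner = torch.dot(omega, g_theta)
    return delta * inner - 0.5 * inner ** 2

# Usage sketch: d equals the number of value-network parameters, and omega lives in R^d.
vnet = ValueNet(state_dim=2)
d = sum(p.numel() for p in vnet.parameters())
omega = torch.zeros(d, requires_grad=True)
loss = L_i(vnet, omega, torch.randn(2), torch.randn(2), torch.tensor(1.0))
grad_theta = torch.autograd.grad(loss, list(vnet.parameters()), retain_graph=True)
grad_omega = torch.autograd.grad(loss, omega)
```

Because g_θ(s) is itself a gradient, differentiating L_i with respect to θ involves second-order information (cf. the Hessian H_θ introduced above), which is why the sketch keeps g_θ in the autograd graph.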
4 SOLUTION APPROACH
As mentioned in Section 3, based on an M-step trajectory {s_1, a_1, ..., s_M, a_M, s_{M+1}} generated by some policy π, our goal is to solve the empirical primal-dual, finite-sum optimization problem:
$$\min_{\theta\in\mathbb{R}^{d}}\ \max_{\omega\in\mathcal{W}}\ \frac{1}{M}\sum_{i=1}^{M}L_i(\theta,\omega) \;=\; \min_{\theta\in\mathbb{R}^{d}}\ \max_{\omega\in\mathcal{W}}\ L(\theta,\omega), \qquad (4)$$
where W is assumed to be a convex constraint set (Problem (4) reduces to Problem (2) when W = R^d). In the appendix, we also discuss the minimax problem in which θ is restricted to a convex constraint set Θ (see Appendix 12 for details). Note that Problem (4) could be non-convex (e.g., with DNN-based nonlinear approximation). Let J(θ) ≜ max_{ω∈W} L(θ,ω). Then, we can equivalently rewrite Problem (4) as min_{θ∈R^d} max_{ω∈W} L(θ,ω) = min_{θ∈R^d} J(θ). Note from (3) that L(θ,ω) is strongly concave w.r.t. ω, which guarantees the existence and uniqueness of the solution to max_{ω∈W} L(θ,ω) for all θ ∈ R^d. Then, given θ ∈ R^d, we define ω*(θ) := argmax_{ω∈W} L(θ,ω). Thus, J(θ) can be further written as:
$$J(\theta) = L(\theta,\omega^{*}(\theta)) = \max_{\omega\in\mathcal{W}} L(\theta,\omega). \qquad (5)$$
The function J(θ) can be viewed as a finite-sample empirical version of the MSPBE. We aim to minimize J(θ) by finding a stationary point of L(θ,ω). To simplify the notation, we use ω* to denote ω*(θ). Note that if D in Eq. (1) is positive definite, Problem (4) is strongly concave in ω, but non-convex in θ in general due to the non-convexity of the function V_θ. Thus, the stated primal-dual objective is an NCSC optimization problem. In this paper, we make the following assumptions:
Assumption 1 (µ-Strong Concavity). The differentiable function L(θ,ω) is µ-strongly concave in ω: L(θ,ω) ≤ L(θ,ω′) + ∇_ω L(θ,ω′)^⊤(ω − ω′) − (µ/2)‖ω − ω′‖² for all ω, ω′ ∈ R^d, some µ > 0, and any fixed θ ∈ R^d. This condition implies that ‖∇_ω L(θ,ω) − ∇_ω L(θ,ω′)‖ ≥ µ‖ω − ω′‖ for all ω, ω′ ∈ R^d; similar proofs can be found in Lemmas 2 and 3 of Zhou (2018).
Assumption 2 (L_f-Smoothness). For i = 1, 2, ..., M, both gradients ∇_θ L_i(θ,ω) and ∇_ω L_i(θ,ω) are L_f-smooth. That is, for all θ, θ′ ∈ R^d and ω, ω′ ∈ R^d, there exists a constant L_f > 0 such that ‖∇L_i(θ,ω) − ∇L_i(θ′,ω′)‖ ≤ L_f(‖θ − θ′‖ + ‖ω − ω′‖).
Algorithm 1 The Variance-Reduced Primal-Dual Stochastic Gradient Method (VRPD).
Input: an M-step trajectory of state-action pairs {s_1, a_1, s_2, a_2, ..., s_M, a_M, s_{M+1}} generated from a given policy; step sizes α, β ≥ 0; initialization points θ^(0) ∈ R^d, ω^(0) ∈ W.
Output: (θ^(K̃), ω^(K̃)), where K̃ is picked independently and uniformly from {1, ..., K}.
1: for k = 0, 1, 2, ..., K − 1 do
2:   If mod(k, q) = 0, compute full gradients G_θ^(k), G_ω^(k) as in Eq. (6).
3:   Otherwise, select S samples independently and uniformly from [M], and compute gradients as in Eq. (7).
4:   Perform the primal-dual updates to obtain the next iterate θ^(k+1), ω^(k+1) as in Eq. (8).
5: end for
Assumption 3 (Bounded Variance). There exists a constant σ > 0 such that for all θ ∈ R^d and ω ∈ R^d, $\frac{1}{M}\sum_{i=1}^{M}\|\nabla_{\theta}L_i(\theta,\omega)-\nabla_{\theta}L(\theta,\omega)\|^{2} \le \sigma^{2}$ and $\frac{1}{M}\sum_{i=1}^{M}\|\nabla_{\omega}L_i(\theta,\omega)-\nabla_{\omega}L(\theta,\omega)\|^{2} \le \sigma^{2}$.
In the above assumptions, Assumption 1 is satisfied if the number of samples M is sufficiently large, coupled with the fact that the matrix D is positive definite. To see this, note that µ = λ_min(D) > 0, where D = E_s[∇_θ V_θ(s) ∇_θ V_θ(s)^⊤] ∈ R^{d×d}, and D tends to be full-rank as M increases. Thus, once we find a µ > 0 for M sufficiently large, this µ is independent of M as M continues to increase. Assumption 2 is standard in the optimization literature. Assumption 3 is also commonly adopted for proving convergence results of SGD- and VR-based algorithms, or algorithms that draw a mini-batch of samples instead of all samples. Assumption 3 is guaranteed to hold under the compact-set condition and is common for stochastic approximation algorithms for minimax optimization (Qiu et al., 2020; Lin et al., 2020a). Assumptions 1–3 are also general assumptions often used in temporal difference (TD) problems (see, e.g., (Qiu et al., 2020; Wai et al., 2019)). With these assumptions, we are now in a position to present our algorithms and their convergence performance results.
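As a quick, assumption-laden illustration of this discussion (not part of our analysis), one can empirically estimate µ = λ_min(D) from sampled states, reusing the autograd-based g_θ from the sketch in Section 3.

```python
# Sketch: empirical check of Assumption 1 via mu = lambda_min(D),
# with D approximated by the average of g_theta(s) g_theta(s)^T over sampled states.
import torch

def estimate_mu(vnet, states):
    params = list(vnet.parameters())
    d = sum(p.numel() for p in params)
    D = torch.zeros(d, d)
    for s in states:
        g = torch.autograd.grad(vnet(s), params)              # g_theta(s); no higher-order graph needed
        g = torch.cat([x.reshape(-1) for x in g]).detach()
        D += torch.outer(g, g) / len(states)
    return torch.linalg.eigvalsh(D).min().item()              # smallest eigenvalue of the symmetric D
```

A strictly positive return value is consistent with the positive-definiteness of D assumed above; a value near zero suggests more samples (or a different parameterization) are needed.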
4.1 THE VARIANCE-REDUCED PRIMAL-DUAL METHOD
In this section, we first present the variance-reduced primal-dual (VRPD) algorithm for solving policy evaluation problems, followed by the theoretical convergence results. Due to space limitation, we provide a proof sketch in the main text and relegate the proof to the supplementary material.
1) Algorithm Description: The full description of VRPD is illustrated in Algorithm 1. In VRPD, for every q iterations, the algorithm calculates the full gradients as follows:
$$G_{\theta}^{(k)} = \frac{1}{M}\sum_{i=1}^{M}\nabla_{\theta}L_i(\theta^{(k)},\omega^{(k)}); \qquad G_{\omega}^{(k)} = \frac{1}{M}\sum_{i=1}^{M}\nabla_{\omega}L_i(\theta^{(k)},\omega^{(k)}). \qquad (6)$$
In all other iterations, VRPD selects a batch of samples S and computes variance-reduced gradient estimators as:
G (k) θ = 1 |S| ∑ i∈S ( ∇θLi(θ(k),ω(k))−∇θLi(θ(k−1),ω(k−1)) +G(k−1)θ ) ; (7a)
G(k)ω = 1 |S| ∑ i∈S ( ∇ωLi(θ(k),ω(k))−∇ωLi(θ(k−1),ω(k−1)) +G(k−1)ω ) . (7b)
The estimators in (7) are constructed iteratively from the previous update information ∇_θ L_i(θ^(k−1), ω^(k−1)) (resp. ∇_ω L_i(θ^(k−1), ω^(k−1))) and G_θ^(k−1) (resp. G_ω^(k−1)). VRPD updates the primal and dual variables as follows:
$$\theta^{(k+1)} = \theta^{(k)} - \beta\,G_{\theta}^{(k)}; \qquad (8a)$$
$$\omega^{(k+1)} = \mathcal{P}_{\mathcal{W}}\big(\omega^{(k)} + \alpha\,G_{\omega}^{(k)}\big) = \operatorname*{argmin}_{\tilde{\omega}\in\mathcal{W}}\big\|\tilde{\omega} - (\omega^{(k)} + \alpha\,G_{\omega}^{(k)})\big\|^{2}, \qquad (8b)$$
where the parameters α and β are constant learning rates for primal and dual updates, respectively.
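To make Algorithm 1 concrete, below is a minimal sketch of the VRPD loop (Eqs. (6)–(8)). The per-sample gradient oracle grad_i, the box projection onto W = [0, 10]^d (matching the constraint used in the experiments of Section 5), and the default q = S = √M choices are illustrative assumptions rather than a definitive implementation.

```python
# Sketch of Algorithm 1 (VRPD): full gradients every q steps (Eq. 6), variance-reduced
# estimators otherwise (Eq. 7), then a primal descent / projected dual ascent step (Eq. 8).
import numpy as np

def project_W(omega, lo=0.0, hi=10.0):
    """Euclidean projection onto the assumed box constraint W = [lo, hi]^d, Eq. (8b)."""
    return np.clip(omega, lo, hi)

def vrpd(grad_i, M, d, K, alpha, beta, q=None, S=None, seed=0):
    """grad_i(i, theta, omega) must return (grad_theta L_i, grad_omega L_i) as two R^d vectors."""
    rng = np.random.default_rng(seed)
    q = q or int(np.sqrt(M))
    S = S or int(np.sqrt(M))
    theta, omega = np.zeros(d), np.zeros(d)
    G_th = G_om = None
    prev = None                                        # (theta, omega) from the previous iteration
    for k in range(K):
        if k % q == 0:                                 # Eq. (6): periodic full gradients
            gs = [grad_i(i, theta, omega) for i in range(M)]
            G_th = np.mean([g[0] for g in gs], axis=0)
            G_om = np.mean([g[1] for g in gs], axis=0)
        else:                                          # Eq. (7): recursive variance-reduced estimators
            idx = rng.integers(0, M, size=S)
            d_th, d_om = np.zeros(d), np.zeros(d)
            for i in idx:
                g_new, g_old = grad_i(i, theta, omega), grad_i(i, *prev)
                d_th += g_new[0] - g_old[0]
                d_om += g_new[1] - g_old[1]
            G_th, G_om = G_th + d_th / S, G_om + d_om / S
        prev = (theta.copy(), omega.copy())
        theta = theta - beta * G_th                    # Eq. (8a): primal (theta) descent
        omega = project_W(omega + alpha * G_om)        # Eq. (8b): projected dual (omega) ascent
    return theta, omega
```

The single-timescale structure is visible here: the primal and dual variables are updated once per iteration with constant step sizes, without any inner maximization loop.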
2) Convergence Performance: In this paper, we propose a new metric for convergence analysis:
$$M(k) := \|\nabla J(\theta^{(k)})\|^{2} + 2\,\|\omega^{(k)} - \omega^{*}(\theta^{(k)})\|^{2}. \qquad (9)$$
The first term in (9) measures the convergence of the primal variable θ. As is common in nonconvex optimization analysis, ‖∇J(θ)‖² = 0 indicates that θ is a first-order stationary point (FOSP) of Problem (4). The second term in (9) measures the convergence of ω^(k) to the unique maximizer ω*(θ^(k)) of L(θ^(k), ·). Note that if Problem (4) is unconstrained in the dual (i.e., ω ∈ R^d), it follows from Assumption 2 and ‖∇_ω L(θ^(k), ω*(θ^(k)))‖² = 0 that M(k) ≥ ‖∇J(θ^(k))‖² + (2/L_f²)‖∇_ω L(θ^(k), ω^(k))‖². We now introduce the notion of approximate first-order stationary points: we say that a point {θ, ω} is an ε-stationary point of the function L(θ,ω) if M ≤ ε is satisfied.
Remark. Several important remarks on the connections between our metric M(k) and conventional convergence metrics in the literature are in order. A conventional convergence metric in the literature for NCSC minimax optimization is ‖∇J(θ^(k))‖² (Lin et al., 2020a; Luo et al., 2020; Zhang et al., 2021), which is the first term of M(k) and measures the convergence of the primal variable θ under a given dual variable ω; this is because ‖∇J(θ)‖² = 0 implies that θ is a FOSP. The novelty in our convergence metric is the second term of M(k), which measures the convergence of ω^(k) to the unique maximizer ω*(θ^(k)) of L(θ^(k), ·). Another conventional convergence metric in the literature on minimizing the empirical MSPBE problem is ‖∇_θ L(θ,ω)‖² + ‖∇_ω L(θ,ω)‖² (Tsitsiklis & Van Roy, 1997). Since the nonconvex-strongly-concave minimax optimization problem is unconstrained in the dual (i.e., ω ∈ R^d), it follows from the Lipschitz smoothness in Assumption 2 and ‖∇_ω L(θ^(k), ω*(θ^(k)))‖² = 0 that ‖ω^(k) − ω*(θ^(k))‖² ≥ (1/L_f²)‖∇_ω L(θ^(k), ω^(k))‖². Therefore, the second term of our M(k), namely 2‖ω^(k) − ω*(θ^(k))‖², is an upper bound on the second term of this conventional metric, ‖∇_ω L(θ,ω)‖². Thus, 2‖ω^(k) − ω*(θ^(k))‖² is a stronger metric than ‖∇_ω L(θ,ω)‖² in the sense that an O(1/K) convergence rate under M(k) implies an O(1/K) convergence rate under the conventional metric, but the converse is not true. Moreover, the benefit of using 2‖ω^(k) − ω*(θ^(k))‖² in our M(k) is that its special structure allows us to prove the O(1/K) convergence, while the second term in the conventional metric fails to do so.
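For completeness, the inequality used above follows in one line from Assumption 2 and the first-order optimality condition ∇_ω L(θ^(k), ω*(θ^(k))) = 0 in the unconstrained dual case:
$$\|\nabla_{\omega} L(\theta^{(k)},\omega^{(k)})\| = \|\nabla_{\omega} L(\theta^{(k)},\omega^{(k)}) - \nabla_{\omega} L(\theta^{(k)},\omega^{*}(\theta^{(k)}))\| \le L_f\,\|\omega^{(k)} - \omega^{*}(\theta^{(k)})\|,$$
which rearranges to $\|\omega^{(k)} - \omega^{*}(\theta^{(k)})\|^{2} \ge \frac{1}{L_f^{2}}\,\|\nabla_{\omega} L(\theta^{(k)},\omega^{(k)})\|^{2}$, as stated.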
With our proposed convergence metric in (9), we have the following convergence result:
Theorem 1. Under Assumptions 1–3, choose step-sizes $\alpha \le \min\big\{\frac{1}{4L_f},\ \frac{2\mu}{34L_f^2+2\mu^2}\big\}$ and $\beta \le \min\big\{\frac{1}{4L_f},\ \frac{1}{2(L_f+L_f^2/\mu)},\ \frac{\mu}{8\sqrt{17}L_f^2},\ \frac{\mu^2\alpha}{8\sqrt{34}L_f^2}\big\}$. Let $q=\sqrt{M}$ and $S=\sqrt{M}$. Then it holds that
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[M(k)] \;\le\; \frac{1}{K\min\{1,L_f^2\}}\Big[\frac{16L_f^2}{\alpha\mu}C_2 + \frac{2}{\beta}C_1\Big] \;=\; \mathcal{O}\Big(\frac{1}{K}\Big),$$
where $C_1 \triangleq \mathbb{E}[J(\theta^{(0)})]-\mathbb{E}[J(\theta^{(*)})]$ and $C_2 \triangleq \mathbb{E}\big[\|\omega^{*}(\theta^{(0)})-\omega^{(0)}\|^2\big]$.
Corollary 2. The overall stochastic sample complexity is $\mathcal{O}(\sqrt{M}\kappa^{3}\epsilon^{-1}+M)$, where $\kappa = L_f/\mu$ denotes the condition number.
Remark. Theorem 1 states that VRPD achieves an O(1/K) convergence rate to an ε-FOSP. The most challenging part in proving Theorem 1 stems from the fact that one needs to simultaneously evaluate the progress of the gradient descent in the primal domain and the gradient ascent in the dual domain of the minimax problem.
Toward this end, the nPD-VR method in (Wai et al., 2019) employs ‖∇_ω L(θ^(k), ω^(k))‖² in their metric to evaluate convergence. However, this approach yields a term F(K) ≜ E[L(θ^(0), ω^(0)) − L(θ^(K), ω^(K))] in their convergence upper bound, which takes the form O(F(K)/K) (cf. Theorem 1, Eq. (26) in (Wai et al., 2019)). Since F(K) depends on K, it is unclear whether or not the nPD-VR method in (Wai et al., 2019) can achieve an O(1/K) convergence rate. This unsatisfactory result motivates us to propose the new metric M(k) in Eq. (9) to evaluate the convergence of our VRPD algorithm. The first part of our convergence metric, ‖∇J(θ^(k))‖², measures the stationarity gap of the primal variable, while the second part, 2‖ω^(k) − ω*(θ^(k))‖², measures the dual optimality gap. Consequently, we bound the per-iteration change in J(θ) instead of the function L(θ^(k), ω^(k)). This helps us avoid the technical limitations of (Wai et al., 2019) and successfully establish the O(1/K) convergence rate, hence resolving an open problem in this area.
Remark. VRPD adopts a large O(1) (i.e., constant) step-size compared to the O(1/M) step-size of nPD-VR (Wai et al., 2019), where M is the dataset size. This also induces faster convergence. Also, VRPD’s estimator uses fresher information from the previous iteration, while VR-STSG (Qiu et al., 2020) and nPD-VR (Wai et al., 2019) only use the information from the beginning of q-sized windows. Collectively, VRPD makes considerably larger progress than the state-of-the-art algorithms (Qiu et al., 2020; Wai et al., 2019).
Algorithm 2 The Adaptive-Batch VRPD Method (VRPD+).
Input: a trajectory of state-action pairs {s_1, a_1, s_2, a_2, ..., s_M, a_M, s_{M+1}} generated from a given policy; step sizes α, β ≥ 0; initialization points θ^(0) ∈ Θ, ω^(0) ∈ R^d.
Output: (θ^(K̃), ω^(K̃)), where K̃ is picked independently and uniformly from {1, ..., K}.
1: for k = 0, 1, 2, ..., K − 1 do
2:   If mod(k, q) = 0, select N_s indices independently and uniformly from [M] as in Eq. (10) and calculate stochastic gradients as in Eq. (11);
3:   Otherwise, select S indices independently and uniformly from [M] and compute gradients as in Eq. (7);
4:   Perform the primal-dual updates as in Eq. (8).
5: end for
4.2 THE ADAPTIVE-BATCH VRPD METHOD (VRPD+)
Note that VRPD still requires full gradients every q iterations, which may entail a high sample complexity. Upon closer observations, we note that accurate gradient estimation plays an important role only in the later stage of the convergence process. This motivates us to further lower the sample complexity of VRPD by using adaptive batch sizes. Toward this end, we propose an adaptive-batch VRPD method (VRPD+) to lower the sample complexity of the VRPD algorithm in Algorithm 1.
1) Algorithm Description: The full description of VRPD+ is illustrated in Algorithm 2. In VRPD+, our key idea is to use the gradients calculated in the previous loop to adjust the batch size Ns of the next loop. Specifically, VRPD+ chooses Ns in the k-th iteration as:
$$N_s = \min\big\{c_{\gamma}\,\sigma^{2}\,(\gamma^{(k)})^{-1},\ c_{\epsilon}\,\sigma^{2}\,\epsilon^{-1},\ M\big\}, \qquad (10)$$
where c_γ, c_ε > c for a certain constant c, M denotes the size of the dataset, σ² is the variance bound, and $\gamma^{(k+1)} = \frac{1}{q}\sum_{i=(n_k-1)q}^{k}\|G_{\theta}^{(i)}\|^{2}$ is the average of the squared stochastic gradient estimators computed in the previous iterations. In VRPD+, for every q iterations, we select N_s samples independently and uniformly from [M] and compute gradient estimators as follows:
$$G_{\theta}^{(k)} = \frac{1}{|\mathcal{N}_s|}\sum_{i\in\mathcal{N}_s}\nabla_{\theta}L_i(\theta^{(k)},\omega^{(k)}); \qquad G_{\omega}^{(k)} = \frac{1}{|\mathcal{N}_s|}\sum_{i\in\mathcal{N}_s}\nabla_{\omega}L_i(\theta^{(k)},\omega^{(k)}). \qquad (11)$$
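A small sketch of the batch-size rule in Eq. (10) and the outer-iteration estimators in Eq. (11) is given below. The constant values (set to 32, matching the constant c used in Section 5), the numerical floor, and the gradient-oracle interface follow the VRPD sketch above and are assumptions for illustration only.

```python
# Sketch of the VRPD+ outer iteration: adaptive batch size (Eq. 10) followed by
# mini-batch gradient estimates (Eq. 11). Inner iterations reuse Eq. (7) unchanged.
import numpy as np

def adaptive_batch_size(recent_sq_grad_norms, sigma2, eps, M, c_gamma=32.0, c_eps=32.0):
    """Eq. (10): N_s = min{ c_gamma * sigma^2 / gamma_k, c_eps * sigma^2 / eps, M }."""
    gamma_k = float(np.mean(recent_sq_grad_norms))          # average ||G_theta^(i)||^2 over the last window
    n1 = c_gamma * sigma2 / max(gamma_k, 1e-12)             # small floor avoids division by zero
    n2 = c_eps * sigma2 / eps
    return max(1, int(min(n1, n2, M)))

def outer_gradients(grad_i, theta, omega, M, N_s, rng):
    """Eq. (11): stochastic gradients on N_s indices drawn uniformly from [M] (no full pass)."""
    idx = rng.integers(0, M, size=N_s)
    gs = [grad_i(i, theta, omega) for i in idx]
    G_th = np.mean([g[0] for g in gs], axis=0)
    G_om = np.mean([g[1] for g in gs], axis=0)
    return G_th, G_om
```

Intuitively, when the recent gradient norms are still large (early in training), the first term keeps N_s small; as the iterates approach a stationary point, N_s grows toward the accuracy-driven cap, but never beyond M.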
For other iterations, VRPD+ is exactly the same as VRPD. Next, we will theoretically show that such an adaptive batch-size scheme still retains the same convergence rate, while achieving an improved sample complexity.
2) Convergence Performance: For VRPD+, we have the following convergence performance result:
Theorem 3. Under Assumptions 1–3, choose step-sizes $\alpha \le \min\big\{\frac{1}{4L_f},\ \frac{2\mu}{34L_f^2+2\mu^2}\big\}$ and $\beta \le \min\big\{\frac{1}{4L_f},\ \frac{1}{2(L_f+L_f^2/\mu)},\ \frac{\mu}{8\sqrt{17}L_f^2},\ \frac{\mu^2\alpha}{8\sqrt{34}L_f^2}\big\}$. Let $q=\sqrt{M}$, $S=\sqrt{M}$, and $c_{\gamma} \ge 288L_f^2/\mu^2 + 8$ in VRPD+, where $c_{\gamma} \ge c$ for some constant $c > 4K + 68K\beta\mu^{2}$. With constants $C_1 \triangleq \mathbb{E}[J(\theta^{(0)})]-\mathbb{E}[J(\theta^{(*)})]$ and $C_2 \triangleq \mathbb{E}\big[\|\omega^{*}(\theta^{(0)})-\omega^{(0)}\|^2\big]$, it holds that
$$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[M(k)] \;\le\; \frac{1}{K\min\{1,L_f^2\}}\Big[\frac{K\epsilon}{2} + \frac{16L_f^2}{\alpha\mu}C_2 + \frac{2}{\beta}C_1\Big] \;=\; \mathcal{O}\Big(\frac{1}{K}\Big) + \frac{\epsilon}{2}.$$
Corollary 4. The overall stochastic sample complexity is $\mathcal{O}(\sqrt{M}\kappa^{3}\epsilon^{-1}+M)$, where $\kappa = L_f/\mu$ denotes the condition number.
Remark. From Theorem 3, it can be seen that VRPD+ achieves the same convergence rate as VRPD. Since VRPD+ draws the subsample set N_s instead of computing full gradients, it achieves a much lower sample complexity than VRPD. Additionally, the convergence performance of VRPD+ is affected by the additional Kε/2 term in the bound, which is due to the use of the adaptive batch size in each outer loop of VRPD+. Also, it can be observed that the convergence rate of the algorithm is affected by the carefully chosen step-sizes α and β, because either a too small or a too large step-size may have a negative impact on the convergence of the algorithm.
Figure 2: Cartpole-v0 environment.
Figure 3: MountainCar-v0 environment.
Figure 4: Cartpole-v0 environment.
[Plot content omitted in this extraction: curves of the loss L(θ,ω) and the gradient-norm metric ‖∇J(θ)‖² + ‖∇L(θ,ω)‖² versus the number of gradient evaluations, with VRPD and VRPD+ among the visible legend entries.]
Remark. The proof of Theorem 3 follows an approach similar to that of Theorem 1. The key difference, and the most challenging part of proving Theorem 3, stems from the relaxation of the error terms ‖∇_θ L(θ^(k), ω^(k)) − G_θ^(k)‖² and ‖∇_ω L(θ^(k), ω^(k)) − G_ω^(k)‖². Thanks to the bounded variance in Assumption 3 and the choice of N_s in Eq. (10), we are able to derive outer-loop bounds for the primal and dual gaps, respectively. We refer readers to the Appendix for the complete proof.
5 EXPERIMENTAL RESULTS
In this section, we conduct numerical experiments to verify our theoretical results. We compare our work with the basic stochastic gradient (SG) method (Lin et al., 2020b) and three state-of-the-art algorithms for PE: nPD-VR (Wai et al., 2019), STSG (Qiu et al., 2020) and VR-STSG (Qiu et al., 2020). Due to space limitation, we provide our detailed experiment settings in the Appendix.
Figure 5: MSE comparison with 10 trials. [Plot content omitted in this extraction: MSE versus trajectory length (200–1000) on MountainCar-v0 and Cartpole-v0, comparing linear and nonlinear approximation.]
Numerical Results: We set the constant learning rates α = 10⁻³, β = 10⁻¹, mini-batch size q = ⌈√M⌉, constant c = 32, and solution accuracy ε = 10⁻³. First, we compare the loss value and gradient norm performance on MountainCar-v0 and Cartpole-v0 against nPD-VR, SG, STSG, and VR-STSG in Figs. 1 and 2. We set the constraint W = [0, 10]^n and initialize all algorithms at the same point, which is generated randomly from the normal distribution. We can see that VR-STSG and nPD-VR converge slowly after 40 epochs, while STSG and SG fail to converge. VRPD converges faster than all the other algorithms with the same step-size values. As for Cartpole-v0, we clearly see a trend of approaching zero loss with VRPD. These results are consistent with our theoretical finding that one can use a relatively large step-size with VRPD, which leads to faster convergence. Also, we compare the sample complexity of VRPD and VRPD+ on MountainCar-v0 and Cartpole-v0; the results are shown in Figs. 3 and 4, respectively. We can see that VRPD+ converges to the same level with far fewer samples than VRPD. Next, we compare the mean squared error (MSE) between the ground-truth value function and the estimated value function over 10 independent runs with linear and nonlinear approximation. In Fig. 5, with the same parameter size, nonlinear approximation always achieves a smaller MSE than linear approximation (Du et al., 2017). Further experiments on the performance of J(θ) are shown in the supplementary material.
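For reference, the hyperparameters reported above can be gathered into a single configuration sketch. It only restates values given in this section; anything not stated here (e.g., random seeds or value-network sizes) is deliberately omitted rather than guessed.

```python
# Experiment settings as reported above (sketch; values copied from the text).
import math

def experiment_config(M):
    return {
        "alpha": 1e-3,                    # dual step size (Eq. 8b)
        "beta": 1e-1,                     # primal step size (Eq. 8a)
        "q": math.ceil(math.sqrt(M)),     # mini-batch / window size, ceil(sqrt(M))
        "c": 32,                          # batch-size constant for Eq. (10)
        "eps": 1e-3,                      # target solution accuracy
        "W_box": (0.0, 10.0),             # dual constraint set W = [0, 10]^n
        "envs": ["MountainCar-v0", "Cartpole-v0"],
        "baselines": ["SG", "STSG", "VR-STSG", "nPD-VR"],
    }
```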
6 CONCLUSION
In this paper, we proposed and analyzed two algorithms called VRPD and VRPD+ for policy evaluation with nonlinear approximation. The VRPD algorithm is based on a simple single-timescale framework by utilizing variance reduction techniques. The VRPD algorithm allows the use of constant step-sizes and achieves an O(1/K) convergence rate. The VRPD+ algorithm improves VRPD by further applying an adaptive batch size based on historical stochastic gradient information. Our experimental results also confirmed our theoretical findings in convergence and sample complexity. | 1. What is the main contribution of the paper regarding non-convex policy evaluation in reinforcement learning?
2. What are the strengths and weaknesses of the proposed VRPD method compared to prior works such as Du et al. (2017)?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's proof techniques, convergence rate analysis, and problem dependencies? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors developed a new efficient, single-timescale variance-reduced primal-dual method (VRPD) with an emphasis on the feasibility of solving the non-convex policy evaluation problem in reinforcement learning (RL) with nonlinear function approximation, also known as value function estimation or on-policy learning with nonlinear function approximations. VRPD is a single-timescale algorithm that meets an O(1/K) rate of convergence to a first-order stationary point for general non-convex-strongly-concave (NCSC) minimax optimization, under mild assumptions and appropriate parameter choices. The authors claim that they provide the best-known result in the literature. An enhanced VRPD method is also proposed, namely VRPD+, to allow adaptive batch sizes using historical information while maintaining the theoretical convergence rate. The authors empirically validated the proposed method and concluded higher sample efficiency over comparable methods including VRPD.
Strengths And Weaknesses
Page 2, middle part. I am confused when the authors indicate "new proof techniques in convergence rate analysis". To quote words in your paper it "resolves an open question and clarifies an ambiguity in the state-of-the-art convergence analysis ...". Can you clarify what was the ambiguity and how did you clean it?
Page 3, middle part (related work). My understanding is that Du et al. (2017) study the TD(0) formulation of policy evaluation with linear function approximation, which can be re-cast as a strongly-convex-concave problem. What is the true novelty of this work, in comparison with Du et al. (2017)? Du et al. (2017) solved this problem via VR-based methods and achieves a linear convergence rate via the gradient descent-ascent type method. Is it correct that nonlinear function approximation leads to possible non-concavity?
Page 4, first paragraph. SREDA adopts "PiSARAH" subroutine but the latter is not detailed.
Page 4, second paragraph. It would be desirable if the dependency on all problem-dependent constants were considered when discussing matching the lower bounds. At first glance, it seems that the method is not sharp in terms of κ, and this is natural because no acceleration method is adopted. In my opinion, even if the resulting upper bound fails to match the lower bound by Zhang et al. (2021) in certain regimes, some fine-grained discussion should be added.
Page 7, middle part (Remark after Theorem 1). The author wrote "Theorem 1 states that VRPD achieves an O(1/K) convergence rate to an ϵ-FOSP". The complexity and rate of convergence have been awkwardly mixed together and caused confusion. One should modify it accordingly. I like the more detailed stochastic sample complexity of O(M + √M κ ϵ⁻¹), where κ denotes the condition number of the strongly convex part. Also, "The most challenging part in proving Theorem 1 stems from the fact that one needs to simultaneously evaluate the progress of the gradient descent in the primal domain and the gradient ascent in the dual domain of the minimax problem". This is merely a standard technique and is well-known in recent minimax optimization.
Some reference bibitems are duplicated, e.g. Sutton et al. Authors are encouraged to clean them when preparing their next version.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written with words and sentences of good quality, and the single-timescale method for the given problem is novel in the literature to the best of my knowledge. The complexity result matches the one in Luo et al. in the regime κ = O(1), and is hence somewhat expected.
ICLR | Title
On $\mathcal{O}(1/K)$ Convergence and Low Sample Complexity for Single-Timescale Policy Evaluation with Nonlinear Function Approximation
Abstract
Learning an accurate value function for a given policy is a critical step in solving reinforcement learning (RL) problems. So far, however, the convergence speed and sample complexity performances of most existing policy evaluation algorithms remain unsatisfactory, particularly with non-linear function approximation. This challenge motivates us to develop a new variance-reduced primal-dual method (VRPD) that is able to achieve a fast convergence speed for RL policy evaluation with nonlinear function approximation. To lower the high sample complexity limitation of variance-reduced approaches (due to the periodic full gradient evaluation with all training data), we further propose an enhanced VRPD method with an adaptive-batch adjustment (VRPD+). The main features of VRPD include: i) VRPD allows the use of constant step sizes and achieves the O(1/K) convergence rate to the first-order stationary points of non-convex policy evaluation problems; ii) VRPD is a generic single-timescale algorithm that is also applicable for solving a large class of non-convex strongly-concave minimax optimization problems; iii) By adaptively adjusting the batch size via historical stochastic gradient information, VRPD+ is more sample-efficient empirically without loss of theoretical convergence rate. Our extensive numerical experiments verify our theoretical findings and showcase the high efficiency of the proposed VRPD and VRPD+ algorithms compared with the state-of-the-art methods.
1 INTRODUCTION
In recent years, advances in reinforcement learning (RL) have achieved enormous successes in a large number of areas, including healthcare (Petersen et al., 2019; Raghu et al., 2017b), financial recommendation (Theocharous et al., 2015), resources management (Mao et al., 2016; Tesauro et al., 2006) and robotics (Kober et al., 2013; Levine et al., 2016; Raghu et al., 2017a), to name just a few. In RL applications, an agent interacts with an environment and repeats the tasks of observing the current state, performing a policy-based action, receiving a reward, and transition to the next state. A key step in many RL algorithms is the policy evaluation (PE) problem, which aims to learn the value function that estimates the expected long-term accumulative reward for a given policy. Value functions not only explicitly provide the agent’s accumulative rewards, but could also be utilized to update the current policy so that the agent can visit valuable states more frequently (Bertsekas & Tsitsiklis, 1995; Lagoudakis & Parr, 2003). In RL policy evaluation, two of the most important performance metrics are convergence rate and sample complexity. First, since policy evaluation is a subroutine of an overall RL task, developing fast-converging policy evaluation algorithms is of critical importance to the overall efficiency of RL. Second, due to the challenges in collecting a large number of training samples (trajectories of state-action pairs) for policy evaluations in RL, reducing the number of samples (i.e., sample complexity) can significantly alleviate the burden of data collection for solving policy evaluation problems. These two important aspects motivate us to pursue a fast-converging policy evaluation algorithm with a low sample-complexity in this paper.
Among various algorithms for policy evaluation, one of the simplest and most effective methods is the temporal difference (TD) learning approach (Sutton, 1988). Instead of focusing on the predicted and actual outcomes, the key idea of the TD learning is to make the difference between temporally successive predictions small. Specifically, the TD learning approach learns the value function by
using the Bellman equation to bootstrap from the current estimated value function. To date, there have been many algorithms proposed within the family of TD learning (Dann et al., 2014). However, most of these methods suffer from either a unstable convergence performance, (e.g., TD(λ) (Sutton, 1988) for off-policy training) or a high computational complexity (e.g., least-squares temporal difference (LSTD) (Boyan, 2002; Bradtke & Barto, 1996)) in training with massive features. The limitation of these early attempts is largely due to the fact that they do not leverage the gradient-oracle in policy evaluation. Thus, in recent years, gradient-based policy evaluation algorithms have become increasingly prevalent. However, the design of efficient gradient-based policy evaluation algorithm is a non-trivial task. On one hand, as an RL task becomes more sophisticated, it is more appropriate to utilize nonlinear function approximation (e.g., deep neural network (DNN)) to model the value function. However, when working with nonlinear DNN models, the convergence performance of the conventional single-timescale TD algorithms may not be guaranteed (Tsitsiklis & Van Roy, 1997). To address this issue, some convergent two-timescale algorithms (Bhatnagar et al., 2009; Chung et al., 2018) have been proposed at the expense of higher implementation complexity. On the other hand, modern policy evaluation tasks could involve a large amount of state transition data. To perform policy evaluation, algorithms typically need to calculate full gradients that require all training data (e.g., gradient temporal difference (GTD) (Sutton et al., 2008) and TD with gradient correction (TDC) (Sutton et al., 2009b)), which entails a high sample complexity. So far, existing works on PE are either focus on linear approximation (GTD2 (Sutton et al., 2009b), PDBG (Du et al., 2017), SVRG (Du et al., 2017), SAGA (Du et al., 2017)) or have such a slower convergence performance (STSG (Qiu et al., 2020), VR-STSG (Qiu et al., 2020), nPD-VR (Wai et al., 2019)) (see detailed discussions in Section. 2). In light of the above limitations, in this paper, we ask the following question: Could we develop an efficient single-timescale gradient-based algorithm for policy evaluation based on nonlinear function approximation?
In this paper, we give an affirmative answer to the above question. Specifically, we propose an efficient gradient-based variance-reduced primal-dual algorithm (VRPD) to tackle the policy evaluation problem with nonlinear function approximation, which we recast as a minimax optimization problem. Our VRPD algorithm admits a simple and elegant single-timescale algorithmic structure. Then, we further enhance VRPD by proposing VRPD+, which uses adaptive batch sizes to relax the periodic full gradient evaluation to further reduce sample complexity. The main contribution of this paper is that our proposed algorithms achieve an O(1/K) convergence rate (K is the number of iterations) with constant step-sizes for policy evaluation with nonlinear function approximation, which is the best-known result in the literature thus far. Our main results are highlighted as follows:
• By utilizing a variance reduction technique, our VRPD algorithm allows constant step-sizes and enjoys a low sample complexity. We show that, under mild assumptions and appropriate parameter choices, VRPD achieves an O(1/K) convergence rate to the first-order stationary point of a class of nonconvex-strongly-concave(NCSC) minimax problems, which is the best-known result in the literature. To achieve this result, our convergence rate analysis introduces new proof techniques and resolves an open question and clarifies an ambiguity in the state-of-the-art convergence analysis of VR-based policy evaluation methods (see 2nd paragraph in Section 2.1 for more discussions).
• VRPD+ significantly improves the sample complexity of the VRPD algorithm for policy evaluation with massive datasets. Our VRPD+ (adaptive-batch VRPD) algorithm incorporates historical information along the optimization path, but does not involve backtracking and condition verification. We show that our VRPD+ algorithm significantly reduces the number of samples and the computation loads of gradients, thanks to our proposed adaptive batch size technique that is able to avoid full gradient evaluation.
• Our extensive experimental results also confirm that our algorithms outperform the state-of-theart gradient-based policy evaluation algorithms, and our VRPD+ can further reduce the sample complexity compared to the VRPD algorithm. It is worth noting that, although the focus of our work is on RL policy evaluation, our algorithmic design and proof techniques contribute to the area of minimax optimization and could be of independent theoretical interest.
2 RELATED WORK
1) TD Learning with Function Approximation for Policy Evaluation: TD learning with function approximation plays a vital role in policy evaluation for RL. The key idea of TD learning is to
minimize the Bellman error for approximating the value function. So far, most existing TD learning algorithms with theoretical guarantees focus on the linear setting (e.g., (Sutton et al., 2009a; Srikant & Ying, 2019; Xu et al., 2020b; Stankovic & Stankovic, 2016; Touati et al., 2018)). Doan et al. (2019), Liu et al. (2015), Macua et al. (2014), and Zhang & Xiao (2019) provided a finite-time analysis for the proposed distributed TD(0) and showed that the convergence rate of their algorithm is O(1/K). It was shown in Du et al. (2017) that policy evaluation with linear function approximation by TD(0) can be formulated as a strongly convex-concave or convex-concave problem, and can be solved by a primal-dual method with a linear convergence rate. However, the linearity assumption cannot be applied in a wide range of policy evaluations with nonlinear models. TD learning with nonlinear (smooth) function approximation is far more complex. Maei et al. (2009) was among the first to propose a general framework for minimizing the generalized mean-squared projected Bellman error (MSPBE) with smooth and nonlinear value functions. However, they adopted twotimescale step-sizes but only obtained a slow convergence performance. Other TD methods with nonlinear function approximations for policy evaluations include (Wang et al., 2017; 2016). Qiu et al. (2020) also investigated nonlinear TD learning and proposed two single-timescale first-order stochastic algorithms. However, the convergence rate of their STSG and VR-STSG are O(1/K1/4) and O(1/K1/3), while our VRPD algorithm achieves a much faster O(1/K) convergence rate. In policy evaluation with non-linear function approximation, the state-of-the-art and the most related work to ours is (Wai et al., 2019), which showed that minimizing the generalized MSPBE problem is equivalent to solving a non-convex-strongly-concave (NCSC) minimax optimization problem via the Fenchel’s duality. However, their best convergence results only hold when the step-size is O( 1M ), where M is the size of the dataset. This is problematic for modern RL problems with a large state-action transition dataset. More importantly, although their convergence theorem appears to have a 1K factor (K being the total number of iterations), their convergence rate bound is in the form of F (K)+Constant1 K·Constant2 (cf. Theorem 1, Eq. (26) in Wai et al. (2019)). Notably, the F
(K) term in the denominator in Eq. (26) inherently depends on the primal and dual values θ(K) and ω(K) in the K-th iteration, respectively. It is unclear whether ω(K) can be bounded in (Wai et al., 2019), hence leading to an ambiguity in guaranteeing an O(1/K) convergence rate. Thus, whether an O(1/K) convergence rate is achievable in single-timescale policy evaluation with nonlinear function approximation and constant step-sizes remains an open question thus far. The key contribution and novelty in this paper is that we resolve the above open question by proposing two new algorithms, both achieving anO(1/K) convergence rate. To establish this result, we propose a new convergence metric (cf. Eq. (9) in Section 4.1), which necessitates new proof techniques and analysis. For easy comparisons, we summarize our algorithms and the related works in Table 1.
2) Relations with NCSC Minimax Optimization: Although the focus of our paper is on RL policy evaluation, our algorithmic techniques are also related to the area of NCSC minimax optimization due to the primal-dual MSPBE formulation (cf. Eq. (2) in Section 3). Early attempts in (Nouiehed et al., 2019; Lin et al., 2020b) developed gradient descent-ascent algorithms to solve the NCSC minimax problems. However, these methods suffer from a high sample complexity and slow convergence rate. To overcome this limitation, two variance-reduction algorithms named SREDA (Luo et al., 2020) are proposed for solving NCSC minimax problems, which shares some similarity to our work.
Later, Xu et al. (2020a) enhanced SREDA to allow bigger step-sizes. However, our algorithms still differ from SREDA in the following key aspects: (i) Our algorithms are single-timescale algorithms, which are much easier to implement. In comparison, SREDA is a two-timescale algorithm, where solving an inner concave maximization subproblem is needed. Thus, to a certain extent, SREDA can be viewed as a triple-loop structure, and hence the computational complexity of SREDA is higher than ours; (ii) In the initialization stage, SREDA uses the PiSARAH, which is a subroutine that aims to help the SREDA algorithm achieve the desired accuracy at the initialization step and can be seen as an additional step to solve an inner concave maximization subproblem. Thus, SREDA has a higher computation cost than our paper. (iii) The number of parameters in SREDA are far more than ours and it requires the knowledge of the condition number to set the algorithm’s parameters for good convergence performance. By contrast, our algorithms only require step-sizes α and β to be sufficiently small, which is easier to tune in practice. (iv) SREDA does not provide an explicit convergence rate in their paper (it is unclear what their convergence rate is from their proof either). Yet, we show that our VRPD in theory has a lower sample complexity than that of SREDA.
Another related work in terms of NCSC minimax optimization is (Zhang et al., 2021), which also provided sample complexity upper and lower bounds. However, there remains a gap between the sample complexity lower and upper bounds in (Zhang et al., 2021). By contrast, the sample complexity of our VRPD algorithm matches the lower bound O(M + √ M −2) in (Zhang et al., 2021), which is the first in the literature. Furthermore, the algorithm contains an inner minimax subproblem (cf. Line 6 of Algorithm 1 in Zhang et al. (2021)). Solving such a subproblems in the inner loop incurs high computational costs. Due to this reason, the algorithm in (Zhang et al., 2021) had to settle for an inexact solution, which hurts the convergence performance in practice. In contrast, our algorithm does not have such a limitation.
3 PRELIMINARIES AND PROBLEM STATEMENT
We start from introducing the necessary background of reinforcement learning, with a focus on the policy evaluation problem based on nonlinear function approximation.
1) Policy Evaluation with Nonlinear Approximation: RL problems are formulated using the Markov decision process (MDP) framework defined by a five-tuple {S,A, P, γ,R}, where S denotes the state space and A is the action space; P : S ×A → S represents the transition function, which specifies the probability of one state transitioning to another after taking an action; R denotes the space of the received reward upon taking an action a ∈ A under state s ∈ S (in this paper, we assume that the state and action spaces are finite, but the numbers of states and actions could be large); and γ ∈ [0, 1) is a time-discount factor. For RL problems over an infinite discrete-time horizon {t ∈ N}, the learning agent executes an action at according to the state st and some policy π : S → A. The system then transitions into a new random state st+1 in the next time-slot. Also, the agent receives a random reward Rπ(st, at). The trajectory generated by a policy π is a sequence of state-action pairs denoted as {s1,a1,s2,a2,. . .}. The goal of the agent is to learn an optimal policy π∗ to maximize the long-term discounted total reward. Specifically, for a policy π (could be a randomized policy), the expected reward received by the agent at state s in any given time-slot can be computed as Rπ(st) = Ea∼π(·|s) [ Rπ(st, a) ] . The value
function V π (s0) = E [ ∑∞ t=0 γ
tR (st) | s0, π] indicates the long-term discounted reward of policy π over an infinite horizon with the initial state at s0 ∈ S. Also, the Bellman equation implies that V π(·): V (s)=T πV (s), where T πf(s) , E[Rπ(s) + γf(s′)|a ∼ π(·|s), s′ ∼ P (·|s, a)] denotes the Bellman operator. In RL, the agent’s goal is to determine an optimal policy π∗ that maximizes the value function V π(s) from any initial state s.
However, the first obstacle in solving RL problems stems from evaluating V π(·) for a given π since P (·|s, a) is unknown. Moreover, it is often infeasible to store V π(s) since the state space S could be large. To address these challenges, one popular approach in RL is to approximate V π(·) using a family of parametric and smooth functions in the form of V π(·) ≈ Vθπ (·), where θπ ∈ Rd is a d-dimensional parameter vector. Here, Θ is a compact subspace. For notational simplicity, we will omit all superscripts “π” whenever the policy π is clear from the context. In this paper, we focus on nonlinear function approximation, i.e., Vθ(·) : S → R is a nonlinear function with respect to (w.r.t.) θ. For example, Vθ(·) could be based on a θ-parameterized nonlinear DNN. We assume that the gradient and Hessian of Vθ(·) exist and are denoted as: gθ(s) := ∇θVθ(s) ∈ Rd, Hθ(s) := ∇2θVθ(s) ∈ Rd×d.
Our goal is to find the optimal parameter θ∗ ∈ Rd that minimizes the error between Vθ∗(·) and V (·). This problem can be formulated as minimizing the mean-squared projected Bellman error (MSPBE) of the value function as follows (Liu et al., 2018):
MSPBE(θ) := 1
2 ∥∥Es∼Dπ(·)[(T πVθ(s)−Vθ(s))∇θVθ(s)>]∥∥2D−1 = max ω∈Rd ( − 1 2 Es∼Dπ(·)[(ω>gθ(s))2] + 〈ω,Es∼Dπ(·) [ (T πVθ(s)−Vθ(s))gθ(s) ] 〉 ) , (1)
where Dπ(·) is the stationary distribution of under policy π andD = Es∼Dπ [gθ(s)g>θ (s)] ∈ Rd×d. 2) Primal-Dual Optimization for MSPBE: It is shown in (Liu et al., 2018) (cf. Proposition 1) that minimizing MSPBE(θ) in (1) is equivalent to solving a primal-dual minimax optimization problem:
min θ∈Rd max ω∈Rd L(θ,ω), (2)
where L(θ,ω) , 〈ω,Es∼Dπ(·) [ (T πVθ(s)−Vθ(s))gθ(s)> ] 〉− 12Es∼Dπ(·)[(ω >gθ(s)) 2]. Since the distribution Dπ(·) is unknown and the expectation cannot be evaluated directly, one often considers the following empirical minimax problem by replacing the expectation in L(θ,ω) with a finite sample average approximation in the stochastic objective function based on an M -step trajectory, i.e., minθ∈Rd maxω∈Rd L(θ,ω)=minθ∈Rd maxω∈Rd 1M ∑M i=1 Li(θ,ω), where
Li(θ,ω) := 〈ω,[R(si, ai, si+1) + γVθ(si+1)− Vθ(si)]× gθ(si)〉 − 1
2 (ω>gθ(si)) 2. (3)
Solving the above empirical minimax problem for MSPBE constitutes the rest of this paper.
4 SOLUTION APPROACH
As mentioned in Section 3, based on an M -step trajectory {s1, a1 · · · , sM , aM , sM+1} generated by some policy π, our goal is to solve the empirical primal-dual and finite-sum optimization problem:
min θ∈Rd max ω∈W
1
M M∑ i=1 Li(θ,ω) = min θ∈Rd max ω∈W L(θ,ω), (4)
where W is assumed to be a convex constrained set (Problem (4) becomes Problem (2) when W = Rd). In our Appendix, we also discussed the min-max problem while θ ∈ Θ. Θ is a convex constrained set. See details in Appendix. 12. Note that Problem (4) could be non-convex (e.g., DNN-based nonlinear approximation). Let J(θ) , maxω∈W L(θ,ω). Then, we can equivalently rewrite Problem (4) as follows: minθ∈Rd maxω∈W L(θ,ω) = minθ∈Rd J(θ). Note from (3) that L(θ,ω) is strongly concave w.r.t. ω, which guarantees the existence and uniqueness of the solution to the problem maxω∈W L(θ,ω),∀θ ∈ Rd. Then, given θ ∈ Rd, we define the following notation: ω∗(θ) := argmax
ω∈W L(θ,ω). Thus, J(θ) can be further written as:
J(θ) = L(θ,ω∗) = max ω∈W L(θ,ω). (5)
The function J(θ) can be viewed as a finite empirical version of MSPBE. We aim to minimize J(θ) by finding the stationary point of L(θ,ω). To simplify the notaion, we use ω∗ to denote ω∗(θ). Note that ifD in Eq. (1) is positive definite, Problem (4) is strongly concave in ω, but non-convex in θ in general due to the non-convexity of function Vθ . Thus, the stated primal-dual objective function is a NCSC optimization problem. In this paper, we make the following assumptions: Assumption 1 (µ-Strongly Concavity). The differentiable function L(θ,ω) is µ-strongly concave in ω: if L(θ,ω) ≤ L(θ,ω′) +∇ωL(θ,ω′)>(ω − ω′)− µ2 ‖ω − ω
′‖2, ∀ω,ω′ ∈ Rd, µ > 0 and any fixed θ ∈ Rd. The above mentioned condition is equivalent to : ‖∇ωL(θ,ω) −∇ωL(θ,ω′)‖ ≥ µ‖ω − ω′‖, ∀ω,ω′ ∈ Rd. Similar proofs can be found in Lemma 2 and 3 in Zhou (2018). Assumption 2 (Lf -Smoothness). For i = 1, 2, . . . ,M , both gradient ∇θLi(θ,ω) and ∇ωLi(θ,ω) are Lf -smooth. That is, for all θ,θ′ ∈ Rd and ω,ω′ ∈ Rd, there exists a constant Lf > 0 such that ‖∇Li(θ,ω)−∇Li(θ′,ω′)‖ ≤ Lf ( ‖θ − θ′‖+ ‖ω − ω′‖ ) .
Algorithm 1 The Variance-Reduced Primal-Dual Stochastic Gradient Method (VRPD). Input: An M -step trajectory of the state-action pairs {s1, a1, s2, a2, · · · , sM , aM , sM+1} generated
from a given policy; step sizes α, β ≥ 0; initialization points θ0 ∈ Rd, ω0 ∈ W . Output: (θ(K̃),ω(K̃)), where K̃ is independently and uniformly picked from {1, · · · ,K};
1: for k = 0, 1, 2, · · · ,K − 1 do 2: If mod(k, q) = 0, compute full gradients G(k)θ , G (k) ω as in Eq. (6). 3: Otherwise, select S samples independently and uniformly from [M ], and compute gradients as in Eq. (7). 4: Perform the primal-dual updates to obtain the next iterate θ(k+1),ω(k+1) as in Eq. (8). 5: end for
Assumption 3 (Bounded Variance). There exists a constant σ > 0 such that for all θ ∈ Rd,ω ∈ Rd, 1 M ∑M i=1 ‖∇θLi(θ,ω)−∇θL(θ,ω)‖2 ≤ σ2 and 1 M ∑M i=1 ‖∇ωLi(θ,ω)−∇ωL(θ,ω)‖2 ≤ σ2.
In the above assumptions, Assumption 1 is satisfied if the number of samples M is sufficiently large and coupling with the fact that the matrixD is positive definite. To see that, note that µ=λmin (D) > 0, where D = Es [ ∇θVθ(s)∇θVθ(s)> ] ∈ Rd×d and D tends to be full-rank as M increases. Thus, as soon as we find a µ > 0 when M is sufficiently large, this µ is independent of M as M continues to increase. Assumption 2 is standard in the optimization literature. Assumption 3 is also commonly adopted for proving convergence results of SGD- and VR-based algorithms, or algorithms that draw a mini-batch of samples instead of all samples. Assumption 3 is guaranteed to hold under the compact set condition and common for stochastic approximation algorithms for minimax optimization (Qiu et al., 2020; Lin et al., 2020a). Assumptions 1–3 are also general assumptions often used in temporal difference (TD) problems (see, e.g., (Qiu et al., 2020; Wai et al., 2019)). With these assumptions, we are now in a position to present our algorithms and their convergence performance results.
4.1 THE VARIANCE-REDUCED PRIMAL-DUAL METHOD
In this section, we first present the variance-reduced primal-dual (VRPD) algorithm for solving policy evaluation problems, followed by the theoretical convergence results. Due to space limitation, we provide a proof sketch in the main text and relegate the proof to the supplementary material.
1) Algorithm Description: The full description of VRPD is illustrated in Algorithm 1. In VRPD, every q iterations, the algorithm calculates the full gradients as follows:
G_θ^{(k)} = (1/M) Σ_{i=1}^M ∇_θ L_i(θ^{(k)}, ω^{(k)});    G_ω^{(k)} = (1/M) Σ_{i=1}^M ∇_ω L_i(θ^{(k)}, ω^{(k)}).    (6)
In all other iterations, VRPD selects a batch of samples S and computes variance-reduced gradient estimators as:
G_θ^{(k)} = (1/|S|) Σ_{i∈S} ( ∇_θ L_i(θ^{(k)}, ω^{(k)}) − ∇_θ L_i(θ^{(k−1)}, ω^{(k−1)}) + G_θ^{(k−1)} );    (7a)
G_ω^{(k)} = (1/|S|) Σ_{i∈S} ( ∇_ω L_i(θ^{(k)}, ω^{(k)}) − ∇_ω L_i(θ^{(k−1)}, ω^{(k−1)}) + G_ω^{(k−1)} ).    (7b)
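A minimal sketch of the recursive estimators in Eq. (7); `grad_Li` is a hypothetical oracle returning the pair (∇_θL_i, ∇_ωL_i) for sample index i, and the previous estimators enter only as additive constants, so averaging the per-sample differences over the mini-batch S reproduces (7a)–(7b).

```python
import numpy as np

def vr_estimator(grad_Li, batch, theta_k, omega_k, theta_prev, omega_prev,
                 G_theta_prev, G_omega_prev):
    """Variance-reduced gradient estimators of Eqs. (7a)-(7b)."""
    diff_theta = np.zeros_like(G_theta_prev)
    diff_omega = np.zeros_like(G_omega_prev)
    for i in batch:
        g_th_new, g_om_new = grad_Li(i, theta_k, omega_k)
        g_th_old, g_om_old = grad_Li(i, theta_prev, omega_prev)
        diff_theta += g_th_new - g_th_old
        diff_omega += g_om_new - g_om_old
    G_theta = diff_theta / len(batch) + G_theta_prev
    G_omega = diff_omega / len(batch) + G_omega_prev
    return G_theta, G_omega
```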
The estimators in (7) are constructed iteratively based on the previous update information ∇_θ L_i(θ^{(k−1)}, ω^{(k−1)}) (resp. ∇_ω L_i(θ^{(k−1)}, ω^{(k−1)})) and G_θ^{(k−1)} (resp. G_ω^{(k−1)}). VRPD updates the primal and dual variables as follows:
θ^{(k+1)} = θ^{(k)} − β G_θ^{(k)};    (8a)
ω^{(k+1)} = P_W(ω^{(k)} + α G_ω^{(k)}) = argmin_{ω̃∈W} ‖ω̃ − (ω^{(k)} + α G_ω^{(k)})‖²,    (8b)
where P_W(·) denotes the Euclidean projection onto the convex constraint set W (which keeps the dual iterate feasible), and β and α are constant learning rates for the primal and dual updates, respectively.
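Putting Eqs. (6)–(8) together, one run of Algorithm 1 can be sketched as below. The per-sample gradient oracle `grad_Li`, the box bounds used for the projection, and the helper `vr_estimator` from the previous sketch are all assumptions for illustration; in particular, the box W = [w_low, w_high]^d is only one convenient choice of convex constraint set for which the projection is a coordinate-wise clip.

```python
import numpy as np

def vrpd(grad_Li, M, d, K, q, S, alpha, beta, w_low=0.0, w_high=10.0, seed=0):
    """Sketch of Algorithm 1 (VRPD)."""
    rng = np.random.default_rng(seed)
    theta, omega = np.zeros(d), np.zeros(d)
    theta_prev = omega_prev = G_theta = G_omega = None
    iterates = []
    for k in range(K):
        if k % q == 0:
            # Eq. (6): full gradients over all M samples
            grads = [grad_Li(i, theta, omega) for i in range(M)]
            G_theta = np.mean([g[0] for g in grads], axis=0)
            G_omega = np.mean([g[1] for g in grads], axis=0)
        else:
            # Eq. (7): variance-reduced estimators on a mini-batch of size S
            batch = rng.integers(0, M, size=S)
            G_theta, G_omega = vr_estimator(grad_Li, batch, theta, omega,
                                            theta_prev, omega_prev, G_theta, G_omega)
        theta_prev, omega_prev = theta.copy(), omega.copy()
        theta = theta - beta * G_theta                           # Eq. (8a): primal descent step
        omega = np.clip(omega + alpha * G_omega, w_low, w_high)  # Eq. (8b): dual ascent + projection onto the box W
        iterates.append((theta.copy(), omega.copy()))
    return iterates[rng.integers(0, K)]  # return a uniformly sampled iterate, as in the Output line
```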
2) Convergence Performance: In this paper, we propose a new metric for convergence analysis:
M(k) := ‖∇J(θ^{(k)})‖² + 2‖ω^{(k)} − ω*(θ^{(k)})‖².    (9)
The first term in (9) measures the convergence of the primal variable θ. As is common in nonconvex optimization analysis, ‖∇J(θ)‖² = 0 indicates that θ is a first-order stationary point (FOSP) of Problem (4). The second term in (9) measures the convergence of ω^{(k)} to the unique maximizer ω*(θ^{(k)}) of L(θ^{(k)}, ·). Note that if Problem (4) is unconstrained in the dual (i.e., ω ∈ R^d), it follows from Assumption 2 and ‖∇_ω L(θ^{(k)}, ω*(θ^{(k)}))‖² = 0 that M(k) ≥ ‖∇J(θ^{(k)})‖² + (2/L_f²)‖∇_ω L(θ^{(k)}, ω^{(k)})‖². We now introduce the notion of approximate first-order stationary points: we say that a point {θ,ω} is an ε-stationary point of the function L(θ,ω) if M(k) ≤ ε is satisfied.
Remark. Several important remarks on the connections between our metric M(k) and the conventional convergence metrics in the literature are in order. A conventional convergence metric in the literature for NCSC minimax optimization is ‖∇J(θ^{(k)})‖² (Lin et al., 2020a; Luo et al., 2020; Zhang et al., 2021), which is the first term of M(k) and measures the convergence of the primal variable θ under a given dual variable ω. This is because ‖∇J(θ)‖² = 0 implies that θ is a FOSP. The novelty in our convergence metric is the second term in M(k), which measures the convergence of ω^{(k)} to the unique maximizer ω*(θ^{(k)}) of L(θ^{(k)}, ·). Another conventional convergence metric in the literature on minimizing the empirical MSPBE is ‖∇_θ L(θ,ω)‖² + ‖∇_ω L(θ,ω)‖² (Tsitsiklis & Van Roy, 1997). Since the nonconvex-strongly-concave minimax optimization problem is unconstrained in the dual (i.e., ω ∈ R^d), it follows from the Lipschitz smoothness in Assumption 2 and ‖∇_ω L(θ^{(k)}, ω*(θ^{(k)}))‖² = 0 that ‖ω^{(k)} − ω*(θ^{(k)})‖² ≥ (1/L_f²)‖∇_ω L(θ^{(k)}, ω^{(k)})‖². Therefore, the second term in our M(k), namely 2‖ω^{(k)} − ω*(θ^{(k)})‖², is an upper bound of the second term in this conventional metric, ‖∇_ω L(θ,ω)‖². Thus, 2‖ω^{(k)} − ω*(θ^{(k)})‖² is a stronger metric than ‖∇_ω L(θ,ω)‖² in the sense that an O(1/K) convergence rate under M(k) implies an O(1/K) convergence rate under the conventional metric, but the converse is not true. Moreover, the benefit of using 2‖ω^{(k)} − ω*(θ^{(k)})‖² in our M(k) is that its special structure allows us to prove the O(1/K) convergence, while the second term in the conventional metric fails to do so.
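Since ω*(θ^{(k)}) is generally unavailable in closed form, M(k) can only be monitored approximately; the sketch below replaces ω*(θ^{(k)}) by many projected gradient-ascent steps on L(θ^{(k)}, ·), an assumption made purely for diagnostics. `grad_J` and `grad_omega_L` are hypothetical full-gradient oracles.

```python
import numpy as np

def metric_M(grad_J, grad_omega_L, theta_k, omega_k,
             ascent_lr=1e-2, ascent_steps=2000, w_low=0.0, w_high=10.0):
    """Approximate evaluation of M(k) in Eq. (9)."""
    omega_star = omega_k.copy()
    for _ in range(ascent_steps):  # approximate omega*(theta_k) by projected gradient ascent
        omega_star = np.clip(omega_star + ascent_lr * grad_omega_L(theta_k, omega_star),
                             w_low, w_high)
    return float(np.sum(grad_J(theta_k) ** 2) + 2.0 * np.sum((omega_k - omega_star) ** 2))
```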
With our proposed convergence metric in (9), we have the following convergence result:
Theorem 1. Under Assumptions 1–3, choose step-sizes α ≤ min{ 1/(4L_f), 2µ/(34L_f² + 2µ²) } and β ≤ min{ 1/(4L_f), 1/(2(L_f + L_f²/µ)), µ/(8√17 L_f²), µ²α/(8√34 L_f²) }. Let q = √M and S = √M; then it holds that:
(1/K) Σ_{k=0}^{K−1} E[M(k)] ≤ (1/(K min{1, L_f²})) [ (16L_f²/(αµ)) C_2 + (2/β) C_1 ] = O(1/K),
where C_1 := E[J(θ^{(0)})] − E[J(θ^{(*)})] and C_2 := E[‖ω*(θ^{(0)}) − ω^{(0)}‖²].
Corollary 2. The overall stochastic sample complexity is O(√M κ³ ε^{−1} + M), where κ = L_f/µ denotes the condition number.
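For a rough sense of scale, the two terms of the Corollary 2 bound can be evaluated numerically (constants omitted); the values M = 10⁴, κ = 10, and ε = 10⁻³ below are purely illustrative.

```python
M, kappa, eps = 10_000, 10.0, 1e-3
term_vr   = (M ** 0.5) * kappa ** 3 / eps   # sqrt(M) * kappa^3 / eps
term_full = M                               # cost of the periodic full-gradient passes
print(f"sqrt(M)*kappa^3/eps term: {term_vr:.2e}")
print(f"M term:                   {term_full:.2e}")
```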
Remark. Theorem 1 states that VRPD achieves an O(1/K) convergence rate to an ε-FOSP. The most challenging part in proving Theorem 1 stems from the fact that one needs to simultaneously evaluate the progress of the gradient descent in the primal domain and the gradient ascent in the dual domain of the minimax problem.
Toward this end, the nPD-VR method in (Wai et al., 2019) employs ‖∇_ω L(θ^{(k)}, ω^{(k)})‖² in their metric to evaluate convergence. However, this approach yields a term F(K) := E[L(θ^{(0)}, ω^{(0)}) − L(θ^{(K)}, ω^{(K)})] in their convergence upper bound in the form of O(F(K)/K) (cf. Theorem 1, Eq. (26) in (Wai et al., 2019)). Since F(K) depends on K, it is unclear whether or not the nPD-VR method in (Wai et al., 2019) can achieve an O(1/K) convergence rate. This unsatisfactory result motivates us to propose the new metric M(k) in Eq. (9) to evaluate the convergence of our VRPD algorithm. The first part of our convergence metric, ‖∇J(θ^{(k)})‖², measures the stationarity gap of the primal variable, while the second part, 2‖ω^{(k)} − ω*(θ^{(k)})‖², measures the dual optimality gap. Consequently, we bound the per-iteration change in J(θ) instead of the function L(θ^{(k)}, ω^{(k)}). This helps us avoid the technical limitations of (Wai et al., 2019) and successfully establish the O(1/K) convergence rate, hence resolving an open problem in this area.
Remark. VRPD adopts a large O(1) (i.e., constant) step-size compared to the O(1/M) step-size of nPD-VR (Wai et al., 2019), where M is the dataset size. This also induces faster convergence. Also, VRPD's estimator uses fresher information from the previous iteration, while VR-STSG (Qiu et al., 2020) and nPD-VR (Wai et al., 2019) only use the information from the beginning of q-sized windows. Collectively, VRPD makes considerably larger progress than the state-of-the-art algorithms (Qiu et al., 2020; Wai et al., 2019).
Algorithm 2 Adaptive-batch VRPD method (VRPD+).
Input: A trajectory of state-action pairs {s_1, a_1, s_2, a_2, · · · , s_M, a_M, s_{M+1}} generated from a given policy; step sizes α, β ≥ 0; initialization points θ^(0) ∈ R^d, ω^(0) ∈ W.
Output: (θ^(K̃), ω^(K̃)), where K̃ is independently and uniformly picked from {1, · · · , K}.
1: for k = 0, 1, 2, · · · , K − 1 do
2:   If mod(k, q) = 0, select N_s indices independently and uniformly from [M] as in Eq. (10) and calculate the stochastic gradients as in Eq. (11);
3:   Otherwise, select S samples independently and uniformly from [M] and compute the gradient estimators as in Eq. (7);
4:   Perform the primal-dual updates as in Eq. (8).
5: end for
4.2 THE ADAPTIVE-BATCH VRPD METHOD (VRPD+)
Note that VRPD still requires full gradients every q iterations, which may entail a high sample complexity. Upon closer observations, we note that accurate gradient estimation plays an important role only in the later stage of the convergence process. This motivates us to further lower the sample complexity of VRPD by using adaptive batch sizes. Toward this end, we propose an adaptive-batch VRPD method (VRPD+) to lower the sample complexity of the VRPD algorithm in Algorithm 1.
1) Algorithm Description: The full description of VRPD+ is illustrated in Algorithm 2. In VRPD+, our key idea is to use the gradients calculated in the previous loop to adjust the batch size N_s of the next loop. Specifically, VRPD+ chooses N_s in the k-th iteration as:
N_s = min{ c_γ σ² (γ^{(k)})^{−1}, c_ε σ² ε^{−1}, M },    (10)
where c_γ, c_ε > c for a certain constant c, M denotes the size of the dataset, σ² is the variance bound, and γ^{(k+1)} = (1/q) Σ_{i=(n_k−1)q}^{k} ‖G_θ^{(i)}‖² is the average squared norm of the stochastic gradient estimators calculated in the previous iterations. In VRPD+, every q iterations, we select N_s samples independently and uniformly from [M] and compute the gradient estimators as follows:
G_θ^{(k)} = (1/|N_s|) Σ_{i∈N_s} ∇_θ L_i(θ^{(k)}, ω^{(k)});    G_ω^{(k)} = (1/|N_s|) Σ_{i∈N_s} ∇_ω L_i(θ^{(k)}, ω^{(k)}).    (11)
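A minimal sketch of the adaptive-batch rule in Eqs. (10)–(11); `recent_G_theta` is assumed to hold the estimators G_θ^{(i)} from the previous q iterations, `grad_Li` is the same hypothetical per-sample oracle as before, and the constants follow the notation of Eq. (10).

```python
import numpy as np

def adaptive_batch_size(recent_G_theta, sigma_sq, eps, c_gamma, c_eps, M):
    """Batch-size rule of Eq. (10)."""
    gamma_k = np.mean([np.sum(g ** 2) for g in recent_G_theta])  # average squared estimator norm over the last q iterations
    return int(min(c_gamma * sigma_sq / gamma_k, c_eps * sigma_sq / eps, M))

def checkpoint_gradients(grad_Li, theta, omega, N_s, M, rng):
    """Large-batch gradients of Eq. (11) on N_s uniformly drawn indices."""
    idx = rng.integers(0, M, size=N_s)
    grads = [grad_Li(i, theta, omega) for i in idx]
    G_theta = np.mean([g[0] for g in grads], axis=0)
    G_omega = np.mean([g[1] for g in grads], axis=0)
    return G_theta, G_omega
```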
For other iterations, VRPD+ is exactly the same as VRPD. Next, we theoretically show that such an adaptive batch-size scheme still retains the same convergence rate, while achieving an improved sample complexity.
2) Convergence Performance: For VRPD+, we have the following convergence performance result:
Theorem 3. Under Assumptions 1–3, choose step-sizes α ≤ min{ 1/(4L_f), 2µ/(34L_f² + 2µ²) } and β ≤ min{ 1/(4L_f), 1/(2(L_f + L_f²/µ)), µ/(8√17 L_f²), µ²α/(8√34 L_f²) }. Let q = √M, S = √M, and c_γ ≥ 288L_f²/µ² + 8 in VRPD+, where c_ε ≥ c for some constant c > 4K + 68Kβ/µ². With constants C_1 := E[J(θ^{(0)})] − E[J(θ^{(*)})] and C_2 := E[‖ω*(θ^{(0)}) − ω^{(0)}‖²], it holds that:
(1/K) Σ_{k=0}^{K−1} E[M(k)] ≤ (1/(K min{1, L_f²})) [ K ε² + (16L_f²/(αµ)) C_2 + (2/β) C_1 ] = O(1/K) + ε².
Corollary 4. The overall stochastic sample complexity is O(√M κ³ ε^{−1} + M), where κ = L_f/µ denotes the condition number.
Remark. From Theorem 3, it can be seen that VRPD+ achieves the same convergence rate as VRPD. Since VRPD+ uses the subsampled set N_s instead of a full gradient calculation, it achieves a much lower sample complexity than VRPD. Additionally, the convergence performance of VRPD+ is affected by the additional ε² term, which is due to the use of the adaptive batch size in each outer loop of VRPD+. Also, it can be observed that the algorithm convergence rate is affected by the carefully chosen step-sizes α and β, because a step-size that is either too small or too large may have a negative impact on the convergence of the algorithm.
Figure 2: Cartpole-v0 environment.
Figure 3: MountainCar-v0 environment.
Figure 4: Cartpole-v0 environment.
[Plots for Figures 2–4: loss L(θ,ω) and gradient-norm metric ‖∇J(θ)‖² + ‖∇_ω L(θ,ω)‖² versus the number of gradient evaluations ("# of grad"), with curves for VRPD and VRPD+.]
Remark. The proof of Theorem 3 follows an approach similar to that of Theorem 1. The key difference and most challenging part of proving Theorem 3 stem from the relaxation on ‖∇_θ L(θ^{(k)}, ω^{(k)}) − G_θ^{(k)}‖² and ‖∇_ω L(θ^{(k)}, ω^{(k)}) − G_ω^{(k)}‖². Thanks to the bounded variance in Assumption 3 and the selected N_s in Eq. (10), we are able to derive outer-loop bounds for the primal and dual gaps, respectively. We refer readers to the Appendix for the details of the complete proof.
5 EXPERIMENTAL RESULTS
In this section, we conduct numerical experiments to verify our theoretical results. We compare our work with the basic stochastic gradient (SG) method (Lin et al., 2020b) and three state-of-the-art algorithms for PE: nPD-VR (Wai et al., 2019), STSG (Qiu et al., 2020) and VR-STSG (Qiu et al., 2020). Due to space limitation, we provide our detailed experiment settings in the Appendix.
Figure 5: MSE comparison with 10 trials.
[Plot: MSE versus trajectory length (200, 600, 1000) for linear and nonlinear approximation on MountainCar-v0 and Cartpole-v0.]
Numerical Results: We set the constant learning rates α = 10^{−3}, β = 10^{−1}, mini-batch size q = ⌈√M⌉, constant c = 32, and solution accuracy ε = 10^{−3}. First, we compare the loss value and gradient norm performance on MountainCar-v0 and Cartpole-v0 with nPD-VR, SG, STSG, and VR-STSG in Figs. 1 and 2. We set the constraint W = [0, 10]^n and initialize all algorithms at the same point, which is generated randomly from the normal distribution. We can see that VR-STSG and nPD-VR converge slowly after 40 epochs, while STSG and SG fail to converge. VRPD converges faster than all the other algorithms with the same step-size values. As for Cartpole-v0, we clearly see a trend of approaching zero loss with VRPD. These results are consistent with our theoretical result that one can use a relatively large step-size with VRPD, which leads to faster convergence. Also, we compare the sample complexity of VRPD and VRPD+ on MountainCar-v0 and Cartpole-v0, and the results are shown in Figs. 3 and 4, respectively. We can see that VRPD+ converges to the same level with much fewer samples than VRPD does. Next, we compare the mean squared error (MSE) between the ground-truth value function and the estimated value function over 10 independent runs with linear approximation and nonlinear approximation. In Fig. 5, with the same parameter size, nonlinear approximation always achieves a smaller MSE than linear approximation (Du et al., 2017). Further experiments on the performance of J(θ) are shown in the supplementary material.
6 CONCLUSION
In this paper, we proposed and analyzed two algorithms called VRPD and VRPD+ for policy evaluation with nonlinear approximation. The VRPD algorithm is based on a simple single-timescale framework by utilizing variance reduction techniques. The VRPD algorithm allows the use of constant step-sizes and achieves an O(1/K) convergence rate. The VRPD+ algorithm improves VRPD by further applying an adaptive batch size based on historical stochastic gradient information. Our experimental results also confirmed our theoretical findings in convergence and sample complexity. | 1. What is the focus of the paper regarding policy evaluation in reinforcement learning?
2. What are the strengths and weaknesses of the proposed method, particularly in its connection to the MSPBE objective?
3. Do you have any concerns about the explanation of the algorithm's update and its relation to existing variance reduction methods?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding typos and unprecise claims? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers the policy evaluation problem of reinforcement learning. Based on a primal-dual formulation of the objective, the authors apply a variance reduction technique to solve the problem under the nonlinear function approximation. They claim that the algorithm attains O(1/K) iteration complexity.
Strengths And Weaknesses
Strength:
The paper proposes a variance reduction method for solving a problem that seems to have a connection with the policy evaluation problem. They obtain the O(1/K) iteration complexity.
Weakness:
The considered finite-sum objective differs from the original MSPBE objective. In fact, the objective cannot even be viewed as an unbiased substitute, as the sample from a trajectory doesn’t follow the stationary distribution. Such an objective is not justified.
The algorithm’s update is poorly explained. From my view, it’s just SPIDER’s update [1], which is a popular variance reduction approach. However, there is no discussion with existing variance reduction methods.
The projection step in (8) is not explained.
There are too many typos and unprecise claims, making the paper hard to read. Here are examples:
The last g_θ(s) in (1) should not have a transpose.
The strong-concavity definition in assumption 4.2 is wrong; the function should also be additionally concave.
The L_f-smoothness definition in assumption 4.3 is imprecise; theta and omega should be in the same norm.
[1] Spider: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator.
Clarity, Quality, Novelty And Reproducibility
The paper is poorly organized and hard to follow. There are too many unjustified and misleading claims. |
ICLR | Title
On $\mathcal{O}(1/K)$ Convergence and Low Sample Complexity for Single-Timescale Policy Evaluation with Nonlinear Function Approximation
Abstract
Learning an accurate value function for a given policy is a critical step in solving reinforcement learning (RL) problems. So far, however, the convergence speed and sample complexity performances of most existing policy evaluation algorithms remain unsatisfactory, particularly with non-linear function approximation. This challenge motivates us to develop a new variance-reduced primal-dual method (VRPD) that is able to achieve a fast convergence speed for RL policy evaluation with nonlinear function approximation. To lower the high sample complexity limitation of variance-reduced approaches (due to the periodic full gradient evaluation with all training data), we further propose an enhanced VRPD method with an adaptive-batch adjustment (VRPD+). The main features of VRPD include: i) VRPD allows the use of constant step sizes and achieves the O(1/K) convergence rate to the first-order stationary points of non-convex policy evaluation problems; ii) VRPD is a generic single-timescale algorithm that is also applicable for solving a large class of non-convex strongly-concave minimax optimization problems; iii) By adaptively adjusting the batch size via historical stochastic gradient information, VRPD+ is more sample-efficient empirically without loss of theoretical convergence rate. Our extensive numerical experiments verify our theoretical findings and showcase the high efficiency of the proposed VRPD and VRPD+ algorithms compared with the state-of-the-art methods.
1 INTRODUCTION
In recent years, advances in reinforcement learning (RL) have achieved enormous successes in a large number of areas, including healthcare (Petersen et al., 2019; Raghu et al., 2017b), financial recommendation (Theocharous et al., 2015), resources management (Mao et al., 2016; Tesauro et al., 2006) and robotics (Kober et al., 2013; Levine et al., 2016; Raghu et al., 2017a), to name just a few. In RL applications, an agent interacts with an environment and repeats the tasks of observing the current state, performing a policy-based action, receiving a reward, and transition to the next state. A key step in many RL algorithms is the policy evaluation (PE) problem, which aims to learn the value function that estimates the expected long-term accumulative reward for a given policy. Value functions not only explicitly provide the agent’s accumulative rewards, but could also be utilized to update the current policy so that the agent can visit valuable states more frequently (Bertsekas & Tsitsiklis, 1995; Lagoudakis & Parr, 2003). In RL policy evaluation, two of the most important performance metrics are convergence rate and sample complexity. First, since policy evaluation is a subroutine of an overall RL task, developing fast-converging policy evaluation algorithms is of critical importance to the overall efficiency of RL. Second, due to the challenges in collecting a large number of training samples (trajectories of state-action pairs) for policy evaluations in RL, reducing the number of samples (i.e., sample complexity) can significantly alleviate the burden of data collection for solving policy evaluation problems. These two important aspects motivate us to pursue a fast-converging policy evaluation algorithm with a low sample-complexity in this paper.
Among various algorithms for policy evaluation, one of the simplest and most effective methods is the temporal difference (TD) learning approach (Sutton, 1988). Instead of focusing on the predicted and actual outcomes, the key idea of the TD learning is to make the difference between temporally successive predictions small. Specifically, the TD learning approach learns the value function by
using the Bellman equation to bootstrap from the current estimated value function. To date, there have been many algorithms proposed within the family of TD learning (Dann et al., 2014). However, most of these methods suffer from either an unstable convergence performance (e.g., TD(λ) (Sutton, 1988) for off-policy training) or a high computational complexity (e.g., least-squares temporal difference (LSTD) (Boyan, 2002; Bradtke & Barto, 1996)) in training with massive features. The limitation of these early attempts is largely due to the fact that they do not leverage the gradient oracle in policy evaluation. Thus, in recent years, gradient-based policy evaluation algorithms have become increasingly prevalent. However, the design of efficient gradient-based policy evaluation algorithms is a non-trivial task. On one hand, as an RL task becomes more sophisticated, it is more appropriate to utilize nonlinear function approximation (e.g., a deep neural network (DNN)) to model the value function. However, when working with nonlinear DNN models, the convergence performance of conventional single-timescale TD algorithms may not be guaranteed (Tsitsiklis & Van Roy, 1997). To address this issue, some convergent two-timescale algorithms (Bhatnagar et al., 2009; Chung et al., 2018) have been proposed at the expense of higher implementation complexity. On the other hand, modern policy evaluation tasks could involve a large amount of state transition data. To perform policy evaluation, algorithms typically need to calculate full gradients that require all training data (e.g., gradient temporal difference (GTD) (Sutton et al., 2008) and TD with gradient correction (TDC) (Sutton et al., 2009b)), which entails a high sample complexity. So far, existing works on PE either focus on linear approximation (GTD2 (Sutton et al., 2009b), PDBG (Du et al., 2017), SVRG (Du et al., 2017), SAGA (Du et al., 2017)) or suffer from a much slower convergence rate (STSG (Qiu et al., 2020), VR-STSG (Qiu et al., 2020), nPD-VR (Wai et al., 2019)) (see detailed discussions in Section 2). In light of the above limitations, in this paper, we ask the following question: Could we develop an efficient single-timescale gradient-based algorithm for policy evaluation based on nonlinear function approximation?
In this paper, we give an affirmative answer to the above question. Specifically, we propose an efficient gradient-based variance-reduced primal-dual algorithm (VRPD) to tackle the policy evaluation problem with nonlinear function approximation, which we recast as a minimax optimization problem. Our VRPD algorithm admits a simple and elegant single-timescale algorithmic structure. Then, we further enhance VRPD by proposing VRPD+, which uses adaptive batch sizes to relax the periodic full gradient evaluation to further reduce sample complexity. The main contribution of this paper is that our proposed algorithms achieve an O(1/K) convergence rate (K is the number of iterations) with constant step-sizes for policy evaluation with nonlinear function approximation, which is the best-known result in the literature thus far. Our main results are highlighted as follows:
• By utilizing a variance reduction technique, our VRPD algorithm allows constant step-sizes and enjoys a low sample complexity. We show that, under mild assumptions and appropriate parameter choices, VRPD achieves an O(1/K) convergence rate to the first-order stationary point of a class of nonconvex-strongly-concave (NCSC) minimax problems, which is the best-known result in the literature. To achieve this result, our convergence rate analysis introduces new proof techniques, resolves an open question, and clarifies an ambiguity in the state-of-the-art convergence analysis of VR-based policy evaluation methods (see the 2nd paragraph in Section 2.1 for more discussions).
• VRPD+ significantly improves the sample complexity of the VRPD algorithm for policy evaluation with massive datasets. Our VRPD+ (adaptive-batch VRPD) algorithm incorporates historical information along the optimization path, but does not involve backtracking and condition verification. We show that our VRPD+ algorithm significantly reduces the number of samples and the computation loads of gradients, thanks to our proposed adaptive batch size technique that is able to avoid full gradient evaluation.
• Our extensive experimental results also confirm that our algorithms outperform the state-of-the-art gradient-based policy evaluation algorithms, and our VRPD+ can further reduce the sample complexity compared to the VRPD algorithm. It is worth noting that, although the focus of our work is on RL policy evaluation, our algorithmic design and proof techniques contribute to the area of minimax optimization and could be of independent theoretical interest.
2 RELATED WORK
1) TD Learning with Function Approximation for Policy Evaluation: TD learning with function approximation plays a vital role in policy evaluation for RL. The key idea of TD learning is to
minimize the Bellman error for approximating the value function. So far, most existing TD learning algorithms with theoretical guarantees focus on the linear setting (e.g., (Sutton et al., 2009a; Srikant & Ying, 2019; Xu et al., 2020b; Stankovic & Stankovic, 2016; Touati et al., 2018)). Doan et al. (2019), Liu et al. (2015), Macua et al. (2014), and Zhang & Xiao (2019) provided a finite-time analysis for the proposed distributed TD(0) and showed that the convergence rate of their algorithm is O(1/K). It was shown in Du et al. (2017) that policy evaluation with linear function approximation by TD(0) can be formulated as a strongly convex-concave or convex-concave problem, and can be solved by a primal-dual method with a linear convergence rate. However, the linearity assumption cannot be applied in a wide range of policy evaluations with nonlinear models. TD learning with nonlinear (smooth) function approximation is far more complex. Maei et al. (2009) was among the first to propose a general framework for minimizing the generalized mean-squared projected Bellman error (MSPBE) with smooth and nonlinear value functions. However, they adopted two-timescale step-sizes and only obtained a slow convergence performance. Other TD methods with nonlinear function approximations for policy evaluations include (Wang et al., 2017; 2016). Qiu et al. (2020) also investigated nonlinear TD learning and proposed two single-timescale first-order stochastic algorithms. However, the convergence rates of their STSG and VR-STSG are O(1/K^{1/4}) and O(1/K^{1/3}), while our VRPD algorithm achieves a much faster O(1/K) convergence rate. In policy evaluation with non-linear function approximation, the state-of-the-art and most related work to ours is (Wai et al., 2019), which showed that minimizing the generalized MSPBE problem is equivalent to solving a non-convex-strongly-concave (NCSC) minimax optimization problem via Fenchel's duality. However, their best convergence results only hold when the step-size is O(1/M), where M is the size of the dataset. This is problematic for modern RL problems with a large state-action transition dataset. More importantly, although their convergence theorem appears to have a 1/K factor (K being the total number of iterations), their convergence rate bound is of the form (F(K) + Constant_1)/(K · Constant_2) (cf. Theorem 1, Eq. (26) in Wai et al. (2019)). Notably, the F(K) term in the bound in Eq. (26) inherently depends on the primal and dual values θ^{(K)} and ω^{(K)} in the K-th iteration, respectively. It is unclear whether ω^{(K)} can be bounded in (Wai et al., 2019), hence leading to an ambiguity in guaranteeing an O(1/K) convergence rate. Thus, whether an O(1/K) convergence rate is achievable in single-timescale policy evaluation with nonlinear function approximation and constant step-sizes has remained an open question thus far. The key contribution and novelty of this paper is that we resolve the above open question by proposing two new algorithms, both achieving an O(1/K) convergence rate. To establish this result, we propose a new convergence metric (cf. Eq. (9) in Section 4.1), which necessitates new proof techniques and analysis. For easy comparison, we summarize our algorithms and the related works in Table 1.
2) Relations with NCSC Minimax Optimization: Although the focus of our paper is on RL policy evaluation, our algorithmic techniques are also related to the area of NCSC minimax optimization due to the primal-dual MSPBE formulation (cf. Eq. (2) in Section 3). Early attempts in (Nouiehed et al., 2019; Lin et al., 2020b) developed gradient descent-ascent algorithms to solve the NCSC minimax problems. However, these methods suffer from a high sample complexity and slow convergence rate. To overcome this limitation, two variance-reduction algorithms named SREDA (Luo et al., 2020) are proposed for solving NCSC minimax problems, which shares some similarity to our work.
Later, Xu et al. (2020a) enhanced SREDA to allow bigger step-sizes. However, our algorithms still differ from SREDA in the following key aspects: (i) Our algorithms are single-timescale algorithms, which are much easier to implement. In comparison, SREDA is a two-timescale algorithm, where solving an inner concave maximization subproblem is needed. Thus, to a certain extent, SREDA can be viewed as a triple-loop structure, and hence the computational complexity of SREDA is higher than ours; (ii) In the initialization stage, SREDA uses the PiSARAH, which is a subroutine that aims to help the SREDA algorithm achieve the desired accuracy at the initialization step and can be seen as an additional step to solve an inner concave maximization subproblem. Thus, SREDA has a higher computation cost than our paper. (iii) The number of parameters in SREDA are far more than ours and it requires the knowledge of the condition number to set the algorithm’s parameters for good convergence performance. By contrast, our algorithms only require step-sizes α and β to be sufficiently small, which is easier to tune in practice. (iv) SREDA does not provide an explicit convergence rate in their paper (it is unclear what their convergence rate is from their proof either). Yet, we show that our VRPD in theory has a lower sample complexity than that of SREDA.
Another related work in terms of NCSC minimax optimization is (Zhang et al., 2021), which also provided sample complexity upper and lower bounds. However, there remains a gap between the sample complexity lower and upper bounds in (Zhang et al., 2021). By contrast, the sample complexity of our VRPD algorithm matches the lower bound O(M + √M ε^{−2}) in (Zhang et al., 2021), which is the first in the literature. Furthermore, their algorithm contains an inner minimax subproblem (cf. Line 6 of Algorithm 1 in Zhang et al. (2021)). Solving such subproblems in the inner loop incurs high computational costs. For this reason, the algorithm in (Zhang et al., 2021) had to settle for an inexact solution, which hurts the convergence performance in practice. In contrast, our algorithm does not have such a limitation.
3 PRELIMINARIES AND PROBLEM STATEMENT
We start by introducing the necessary background on reinforcement learning, with a focus on the policy evaluation problem with nonlinear function approximation.
1) Policy Evaluation with Nonlinear Approximation: RL problems are formulated using the Markov decision process (MDP) framework defined by a five-tuple {S,A, P, γ,R}, where S denotes the state space and A is the action space; P : S ×A → S represents the transition function, which specifies the probability of one state transitioning to another after taking an action; R denotes the space of the received reward upon taking an action a ∈ A under state s ∈ S (in this paper, we assume that the state and action spaces are finite, but the numbers of states and actions could be large); and γ ∈ [0, 1) is a time-discount factor. For RL problems over an infinite discrete-time horizon {t ∈ N}, the learning agent executes an action at according to the state st and some policy π : S → A. The system then transitions into a new random state st+1 in the next time-slot. Also, the agent receives a random reward Rπ(st, at). The trajectory generated by a policy π is a sequence of state-action pairs denoted as {s1,a1,s2,a2,. . .}. The goal of the agent is to learn an optimal policy π∗ to maximize the long-term discounted total reward. Specifically, for a policy π (could be a randomized policy), the expected reward received by the agent at state s in any given time-slot can be computed as Rπ(st) = Ea∼π(·|s) [ Rπ(st, a) ] . The value
function V^π(s_0) = E[ Σ_{t=0}^∞ γ^t R(s_t) | s_0, π ] indicates the long-term discounted reward of policy π over an infinite horizon with the initial state at s_0 ∈ S. Also, the Bellman equation implies that V^π(·) satisfies V(s) = T^π V(s), where T^π f(s) := E[ R^π(s) + γ f(s′) | a ∼ π(·|s), s′ ∼ P(·|s, a) ] denotes the Bellman operator. In RL, the agent's goal is to determine an optimal policy π* that maximizes the value function V^π(s) from any initial state s.
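As a simple illustration of the value function defined above, V^π(s_0) can be estimated by truncated Monte Carlo rollouts; `sample_rewards` is an assumed callable that simulates one trajectory from s_0 under π and returns its reward sequence.

```python
def mc_value_estimate(sample_rewards, gamma, num_trajectories=1000):
    """Truncated Monte Carlo estimate of V^pi(s_0) = E[ sum_t gamma^t R(s_t) | s_0, pi ]."""
    total = 0.0
    for _ in range(num_trajectories):
        rewards = sample_rewards()          # one simulated trajectory's rewards under policy pi
        ret, discount = 0.0, 1.0
        for r in rewards:
            ret += discount * r
            discount *= gamma
        total += ret
    return total / num_trajectories
```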
However, the first obstacle in solving RL problems stems from evaluating V^π(·) for a given π, since P(·|s, a) is unknown. Moreover, it is often infeasible to store V^π(s) since the state space S could be large. To address these challenges, one popular approach in RL is to approximate V^π(·) using a family of parametric and smooth functions of the form V^π(·) ≈ V_{θ^π}(·), where θ^π ∈ R^d is a d-dimensional parameter vector (which may be restricted to a compact subspace Θ). For notational simplicity, we will omit all superscripts "π" whenever the policy π is clear from the context. In this paper, we focus on nonlinear function approximation, i.e., V_θ(·) : S → R is a nonlinear function with respect to (w.r.t.) θ. For example, V_θ(·) could be based on a θ-parameterized nonlinear DNN. We assume that the gradient and Hessian of V_θ(·) exist and are denoted as g_θ(s) := ∇_θ V_θ(s) ∈ R^d and H_θ(s) := ∇²_θ V_θ(s) ∈ R^{d×d}.
Our goal is to find the optimal parameter θ∗ ∈ Rd that minimizes the error between Vθ∗(·) and V (·). This problem can be formulated as minimizing the mean-squared projected Bellman error (MSPBE) of the value function as follows (Liu et al., 2018):
MSPBE(θ) := (1/2) ‖ E_{s∼D^π(·)}[ (T^π V_θ(s) − V_θ(s)) ∇_θ V_θ(s)^T ] ‖²_{D^{−1}}
          = max_{ω∈R^d} ( −(1/2) E_{s∼D^π(·)}[ (ω^T g_θ(s))² ] + ⟨ω, E_{s∼D^π(·)}[ (T^π V_θ(s) − V_θ(s)) g_θ(s) ]⟩ ),    (1)
where D^π(·) is the stationary state distribution under policy π and D = E_{s∼D^π}[ g_θ(s) g_θ(s)^T ] ∈ R^{d×d}.
2) Primal-Dual Optimization for MSPBE: It is shown in (Liu et al., 2018) (cf. Proposition 1) that minimizing MSPBE(θ) in (1) is equivalent to solving a primal-dual minimax optimization problem:
min_{θ∈R^d} max_{ω∈R^d} L(θ,ω),    (2)
where L(θ,ω) := ⟨ω, E_{s∼D^π(·)}[ (T^π V_θ(s) − V_θ(s)) g_θ(s)^T ]⟩ − (1/2) E_{s∼D^π(·)}[ (ω^T g_θ(s))² ]. Since the distribution D^π(·) is unknown and the expectation cannot be evaluated directly, one often considers the following empirical minimax problem by replacing the expectation in L(θ,ω) with a finite sample average approximation in the stochastic objective function based on an M-step trajectory, i.e., min_{θ∈R^d} max_{ω∈R^d} L(θ,ω) = min_{θ∈R^d} max_{ω∈R^d} (1/M) Σ_{i=1}^M L_i(θ,ω), where
L_i(θ,ω) := ⟨ω, [R(s_i, a_i, s_{i+1}) + γ V_θ(s_{i+1}) − V_θ(s_i)] g_θ(s_i)⟩ − (1/2) (ω^T g_θ(s_i))².    (3)
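For concreteness, the per-sample objective in Eq. (3) translates directly into code; `V` and `grad_V` are assumed callables returning V_θ(s) and g_θ(s) = ∇_θV_θ(s) for whatever (possibly nonlinear) approximator is used.

```python
import numpy as np

def L_i(theta, omega, transition, V, grad_V, gamma):
    """Per-sample primal-dual objective of Eq. (3)."""
    s_i, a_i, r_i, s_next = transition          # a_i is unused here but kept for clarity
    g = grad_V(theta, s_i)                      # g_theta(s_i)
    td = r_i + gamma * V(theta, s_next) - V(theta, s_i)   # empirical Bellman residual
    return float(np.dot(omega, td * g) - 0.5 * np.dot(omega, g) ** 2)
```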
Solving the above empirical minimax problem for MSPBE constitutes the rest of this paper.
4 SOLUTION APPROACH
As mentioned in Section 3, based on an M -step trajectory {s1, a1 · · · , sM , aM , sM+1} generated by some policy π, our goal is to solve the empirical primal-dual and finite-sum optimization problem:
min_{θ∈R^d} max_{ω∈W} (1/M) Σ_{i=1}^M L_i(θ,ω) = min_{θ∈R^d} max_{ω∈W} L(θ,ω),    (4)
6 CONCLUSION
In this paper, we proposed and analyzed two algorithms called VRPD and VRPD+ for policy evaluation with nonlinear approximation. The VRPD algorithm is based on a simple single-timescale framework by utilizing variance reduction techniques. The VRPD algorithm allows the use of constant step-sizes and achieves an O(1/K) convergence rate. The VRPD+ algorithm improves VRPD by further applying an adaptive batch size based on historical stochastic gradient information. Our experimental results also confirmed our theoretical findings in convergence and sample complexity. | 1. What is the focus and contribution of the paper on policy evaluation with nonlinear function approximation?
2. What are the strengths of the proposed algorithm, particularly in terms of variance reduction techniques and convergence rate?
3. What are the weaknesses of the paper regarding its convergence metric and comparison with prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Can the authors establish guarantees for the two terms in Equation 9 separately?
6. How does the reviewer suggest improving the paper regarding its technical novelty and comparison with previous works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies policy evaluation with nonlinear function approximation. It proposes and analyzes two algorithms called VRPD and VRPD+ to optimize the primal-dual form of the MSPBE. The VRPD algorithm utilizes variance reduction techniques to achieve an O(1/K) convergence rate.
Strengths And Weaknesses
Strength:
Paper is well-written and easy to follow.
Compared with previous work on the non-convex-strongly-concave minimax optimization problem, the convergence rate of this work achieves O(1/K), which is the same as before and has fewer constraints.
Weakness:
The convergence metric is unconventional. The first term in Eqn (9) is the full gradient w.r.t. θ, which (I believe) includes both the partial derivative of the first input and the chained gradient w.r.t. θ via the second input. This first term already suffices to guarantee a first-order stationary point, rendering the second term redundant. On the other hand, bounding the partial gradient of both inputs is also acceptable. I wonder why the analysis of this paper fails to provide a guarantee for either of these metrics.
The technical novelty of the minimax optimization technique used here is not clearly stated and compared with previous works. I believe it helps to also make a list to compare with previous NCSC minimax optimization algorithms.
Clarity, Quality, Novelty And Reproducibility
The paper is of high quality and clarity.
Questions:
Regarding the weakness mentioned above, can you establish guarantees for the two terms in Eqn (9) separately?
It seems that sampling a trajectory with a length of M cannot be used as a finite-sample approximation of the stationary distribution. In the discounted setting, the classical way to do so is to run the trajectory with a stopping probability of 1 − γ, and only use the last step as an unbiased sample from the stationary distribution. This doesn't really affect your optimization results but should be made clear under the context of RL or policy evaluation.
ICLR | Title
On Stability and Generalization of Bilevel Optimization Problems
Abstract
(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with a wide range of applications such as meta-learning, hyper-parameter optimization, and reinforcement learning. Most of the existing studies on this problem only focused on analyzing the convergence or improving the convergence rate, while little effort has been devoted to understanding its generalization behaviors. In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem. We first establish a fundamental connection between algorithmic stability and generalization gap in different forms and give a high probability generalization bound which improves the previous best one from O( √ n) to O(log n), where n is the sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
1 INTRODUCTION
(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with various applications such as meta-learning (Finn et al., 2017; Bertinetto et al., 2018; Rajeswaran et al., 2019), hyper-parameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Baydin et al., 2017; Bergstra et al., 2011; Luketina et al., 2016), reinforcement learning (Hong et al., 2020), and few-shot learning (Koch et al., 2015; Santoro et al., 2016; Vinyals et al., 2016). The basic form of this problem can be defined as follows:
\min_{x \in \mathbb{R}^{d_1}} R(x) = F(x, y^*(x)) := \mathbb{E}_{\xi}[f(x, y^*(x); \xi)]
\quad \text{s.t.} \quad y^*(x) = \arg\min_{y \in \mathbb{R}^{d_2}} \{ G(x, y) := \mathbb{E}_{\zeta}[g(x, y; \zeta)] \},   (1)
where f : R^{d_1} × R^{d_2} → R and g : R^{d_1} × R^{d_2} → R are two continuously differentiable loss functions with respect to x and y. Problem (1) has an optimization hierarchy of two levels, where the outer-level objective function f depends on the minimizer y*(x) of the inner-level objective function g.
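As a concrete toy instance of the structure in (1), consider tuning an ℓ2-regularization weight: the outer variable x is the regularization strength evaluated on held-out data, and the inner variable y is the regression weight fit on training data. The sketch below only illustrates this structure; the quadratic losses, data, and variable names are our own assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instantiation of problem (1): hyper-parameter tuning.
# Outer variable x: scalar regularization strength; outer loss f = validation MSE.
# Inner variable y: regression weights; inner loss g = training MSE + (x/2)*||y||^2.
A_tr, b_tr = rng.normal(size=(50, 5)), rng.normal(size=50)    # "training" data (inner level)
A_val, b_val = rng.normal(size=(30, 5)), rng.normal(size=30)  # "validation" data (outer level)

def inner_loss(x, y):   # g(x, y)
    return 0.5 * np.mean((A_tr @ y - b_tr) ** 2) + 0.5 * x * np.sum(y ** 2)

def outer_loss(x, y):   # f(x, y): depends on x only through the fitted y
    return 0.5 * np.mean((A_val @ y - b_val) ** 2)

def y_star(x):          # inner minimizer y*(x), available in closed form for this toy problem
    n = len(b_tr)
    return np.linalg.solve(A_tr.T @ A_tr / n + x * np.eye(5), A_tr.T @ b_tr / n)

for x in [1e-3, 1e-1, 1.0]:
    print(f"x={x:6.3f}  R(x) = F(x, y*(x)) = {outer_loss(x, y_star(x)):.4f}")
```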
Due to its importance, the above bilevel optimization problem has received considerable attention in recent years. A natural way to solve problem (1) is to apply alternating stochastic gradient updates that approximate ∇_y g(x, y) and ∇f(x, y), respectively. Briefly speaking, previous efforts mainly examined two types of methods to obtain an approximate solution that is close to the optimum y*(x). One is the single-timescale strategy (Chen et al., 2021; Guo et al., 2021; Khanduri et al., 2021; Hu et al., 2022), where the updates for y and x are carried out simultaneously. The other is the two-timescale strategy (Ghadimi & Wang, 2018; Ji et al., 2021;
Hong et al., 2020; Pedregosa, 2016), where the update of y is repeated multiple times to achieve a more accurate approximation before conducting the update of x.
While there is a long list of work on bilevel optimization, most of the existing work focuses on either analyzing its convergence behaviors (Ghadimi & Wang, 2018; Hong et al., 2020; Ji et al., 2021) or improving its convergence rate, based on the convexity and smoothness properties of f(·, ·) and/or g(·, ·) (Liu et al., 2020; Li et al., 2020). In contrast, little effort has been devoted to understanding the generalization behavior of the problem. To the best of our knowledge, there is only one recent work on generalization analysis for bilevel problems (Bao et al., 2021), which presents the first expected uniform stability bound. However, there are still several undesirable issues in this work: (1) Their result covers only uniform stability (which can be deduced from argument stability under certain conditions; see Definition 4 for details), leaving the analysis of other, stronger notions of algorithmic stability open. (2) Additionally, their UD algorithm allows the outer-level parameters to be updated continuously but needs to reinitialize the inner-level parameters before each pass through the inner loop, which is not commonly done in practice due to its inefficiency (see line 4 in Algorithm 3). (3) The proof of Theorem 2 in their work does not make clear whether the update of the outer-level parameters depends on the inner-level parameters, so there may be a gap in the analysis of the UD algorithm (see Appendix E for detailed discussions). (4) Their experiments consider only hyper-parameter optimization and neglect other applications among bilevel optimization instances.
To address all the aforementioned issues, we give in this paper a thorough analysis on the generalization behaviors of first-order (gradient-based) methods for general bilevel optimization problem. We employ the recent advances of algorithmic stability to investigate the generalization behaviors in different settings. Specifically, our main contributions can be summarized as follows:
• Firstly, we establish a fundamental connection between the generalization gap and different notions of algorithmic stability (argument stability and uniform stability) for any randomized bilevel optimization algorithm, in both expectation and high-probability forms. Specifically, we show that the high-probability form of the generalization gap bound can be improved from O(√n) to O(log n) compared with the result in Bao et al. (2021).
• Next, we present stability bounds for gradient-based methods with either the single-timescale or the two-timescale update strategy under different standard settings. To the best of our knowledge, this work provides the first stability bounds for two-timescale (double loop) algorithms, which allow the accumulation of the sub-sampled gradients in the inner level. In detail, we consider the strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC) settings, and further extend our analysis to a particular nonconvex-strongly-convex (NC-SC) setting that widely appears in practice. Table 1 summarizes our main results.
• Thirdly, we provide the first generalization bounds for the case where both the outer and inner level parameters are subject to continuous (iterative) changes. Compared to the previous work (Bao et al., 2021), our work does not need the reinitialization step before each iteration in the inner level and hence our algorithm can carry over the last updated inner level parameters, which is more general and practical.
• Finally, we conduct empirical studies to corroborate our theories via meta-learning and hyperparameter optimization, which are two applications of bilevel optimization.
Due to space limitations, all the proofs and additional experiments are included in Appendix.
1.1 RELATED WORK
Research at the interface between generalization and the bilevel problem can be roughly classified into two categories. The first one includes all the research on bilevel optimization. In recent decades, extensive studies have been done on this topic, showing that bilevel optimization has a wide range of applications in machine learning such as hyper-parameter optimization (Franceschi et al., 2018; Lorraine & Duvenaud, 2018; Okuno et al., 2021), meta-learning (Bertinetto et al., 2018; Rajeswaran et al., 2019; Soh et al., 2020) and reinforcement learning (Yang et al., 2018; Tschiatschek et al., 2019). Most of the existing work studies the problem from an optimization perspective. For example, Ghadimi & Wang (2018); Ji et al. (2021) provide convergence rate analyses under a nonconvex-strongly-convex assumption on the two functions f(·, ·) and g(·, ·). Grazzi et al. (2020) consider the iteration complexity of hypergradient computation. Liu et al. (2020); Li et al. (2020) present an asymptotic analysis for the convex-strongly-convex setting. Perhaps the most related one to ours from the generalization standpoint (i.e., the gap between the population risk and the empirical risk) is Bao et al. (2021), although there may be a gap in its analysis of the UD algorithm. In this work, we employ a novel approach to examine the stability bounds of bilevel optimization problems. Firstly, our work analyzes the generalization behavior by observing how different settings directly affect the stability bounds. Secondly, our work adopts a stronger version of stability called argument stability, which implies the previously used uniform stability when the function is sufficiently smooth. Furthermore, our work does not need to reinitialize the inner-level parameters and allows them to carry over their last updated values each time the inner level is updated. This means that y in the inner level is updated iteratively and depends on the current parameter x, which is more common and efficient in practice.
The second category includes all the work on stability analysis. There is a long list of research on stability and generalization (Bousquet & Elisseeff, 2002; Mukherjee et al., 2006; Shalev-Shwartz et al., 2010). Bousquet & Elisseeff (2002) first introduces the notion of uniform stability and establishes the first framework of stability analysis. Hardt et al. (2016) later extends the stability analysis to iterative algorithms based on stochastic gradient methods for the vanilla stochastic optimization. After that, there are subsequent studies on generalization analysis for various problems via algorithmic stability, such as minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021; Zhang et al., 2021) and pairwise learning (Yang et al., 2021; Lei et al., 2020; Xue et al., 2021; Huai et al., 2020). However, it is notable that due to the additional stochastic function in the constraint in the bilevel optimization, all the previous techniques and results cannot be applied to our problem. Although the generalization analysis of minmax optimization is somewhat similar to ours, it involves only one objective function f and a single level in algorithms for typical minmax optimization problems, while in the bilevel optimization algorithms there is an inner level and an outer level, which is considerably more challenging.
2 PRELIMINARIES
2.1 DEFINITIONS AND ASSUMPTIONS
In the following, we give some necessary definitions and assumptions that are widely used in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2021; Khanduri et al., 2021) and generalization analysis (Hardt et al., 2016; Lei et al., 2021). Definition 1 (Joint Lipschitz Continuity). A function f(x, y) is jointly L-Lipschitz over R^{d_1} × R^{d_2} if, for all x, x' ∈ R^{d_1} and y, y' ∈ R^{d_2}, |f(x, y) − f(x', y')| ≤ L √(‖x − x'‖_2² + ‖y − y'‖_2²).
Definition 2 (Smoothness). A function f is l-smooth over a set S if for all u, w ∈ S, ‖∇f(u) − ∇f(w)‖ ≤ l‖u − w‖.
Definition 3 (Strong Convexity). A function f is µ-strongly-convex over a set S if for all u, w ∈ S, f(u) + ⟨∇f(u), w − u⟩ + (µ/2)‖w − u‖² ≤ f(w).
Assumption 1 (Inner-level Function Assumption). We assume the inner stochastic function g(x, y) in (1) satisfies the following: (i) g(x, y) is jointly L_g-Lipschitz for any x ∈ R^{d_1} and y ∈ R^{d_2}; (ii) g(x, y) is continuously differentiable and l_g-smooth for any (x, y) ∈ R^{d_1} × R^{d_2}.
Assumption 2 (Outer-level Function Assumption). We assume the outer stochastic function f(x,y) in (1) satisfies the following: (iii) f(x,y) is jointly Lf -Lipschitz for any x ∈ Rd1 and y ∈ Rd2 . (iv) f(x,y) is continuously differentiable and lf -smooth for any (x,y) ∈ Rd1 × Rd2 .
2.2 PROBLEM FORMULATION
Given two distributions D_1 and D_2, in the (stochastic) optimization problem we aim to find the minimizer of Problem (1). However, since the distributions are often unknown, in practice we only have two finite-size datasets D_{m_1} = {ξ_i | i = 1, ..., m_1} ∼ D_1^{m_1} and D_{m_2} = {ζ_i | i = 1, ..., m_2} ∼ D_2^{m_2}, where the ξ_i and ζ_i are i.i.d. sampled from D_1 and D_2, respectively. Based on these datasets, we design some (randomized) algorithm A with output A(D_{m_1}, D_{m_2}) = (x, y) ∈ R^{d_1} × R^{d_2}. Our goal is to investigate the generalization behavior of this output. Note that although there are two stochastic functions in the bilevel optimization problem, we only care about the generalization of the outer-level one, since it is the one we ultimately wish to minimize.
Below we define the generalization gap to measure the generalization behavior. Given a distribution D_1 and a finite dataset D_{m_1} ∼ D_1^{m_1}, the population risk of x, y on D_1 is defined as R(x, y, D_1) := E_{ξ∼D_1}[f(x, y(x); ξ)], and its empirical risk on D_{m_1} is R_s(x, y, D_{m_1}) = (1/m_1) Σ_{i=1}^{m_1} f(x, y(x); ξ_i). Moreover, for a fixed hyperparameter x ∈ R^{d_1} and y(x) ∈ R^{d_2} (note that y(x) might depend on x), we define the difference between the population risk and the empirical risk over (x, y(x)) as the bilevel generalization gap of (x, y(x)): E_s[R(x, y) − R_s(x, y)], where E_s denotes the expectation over D_{m_1} ∼ D_1^{m_1}. When there is no ambiguity, we simplify the notations as R(x, y, D_1) = R(x, y) and R_s(x, y, D_{m_1}) = R_s(x, y). Our goal is thus to analyze the bilevel generalization gap of the output of an algorithm A(D_{m_1}, D_{m_2}) based on D_{m_1} and D_{m_2}. Since the generalization gap depends on the algorithm itself, in the following we introduce the algorithms considered in this paper.
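For intuition, the bilevel generalization gap can be approximated numerically by comparing the outer loss on the sample D_{m_1} with a Monte-Carlo estimate of the population risk on a large held-out sample from D_1. The snippet below is a schematic of this comparison on assumed data and an assumed squared loss; it is not part of the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def outer_loss(y, a, b):
    # f(x, y; xi) with xi = (a, b): squared prediction error (x enters only through y here)
    return 0.5 * (a @ y - b) ** 2

# Suppose (x, y) = A(D_m1, D_m2) is the output of some bilevel algorithm, held fixed here.
y = rng.normal(size=5)

a_tr, b_tr = rng.normal(size=(100, 5)), rng.normal(size=100)          # D_m1
a_pop, b_pop = rng.normal(size=(100000, 5)), rng.normal(size=100000)  # large proxy sample for D_1

empirical_risk = outer_loss(y, a_tr, b_tr).mean()      # R_s(x, y)
population_risk = outer_loss(y, a_pop, b_pop).mean()   # Monte-Carlo estimate of R(x, y)
print("estimated bilevel generalization gap:", population_risk - empirical_risk)
```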
Most of the existing algorithms adopt the following idea: first approximate y∗ on Dm2 for a given parameter x in the inner level and then seek the hyperparameter x∗(Dm1 , Dm2) with corresponding hypothesis y∗(x∗(Dm1 , Dm2), Dm2) by the below estimation:
x̂(D_{m_1}, D_{m_2}) ≈ argmin_x R_s(x, ŷ(x, D_{m_2}), D_{m_1}),  where  ŷ(x, D_{m_2}) ≈ argmin_y G_s(x, y, D_{m_2}),   (2)
where G_s(x, y, D_{m_2}) is the empirical risk of G(x, y) over D_{m_2}, i.e., G_s(x, y, D_{m_2}) = (1/m_2) Σ_{i=1}^{m_2} g(x, y(x); ζ_i). Most of the current gradient-based (first-order) algorithms for approximating (2) fall into two classes: single-timescale methods and two-timescale methods. The single-timescale method performs the updates for y and x simultaneously via stochastic gradient descent (SGD), while the two-timescale method updates y multiple times before each update of x (again via SGD). As there are numerous approaches in both classes (see the Related Work section for details), in this paper we analyze the generalization behavior of the most classical and standard representative of each class, i.e., single-timescale SGD (SSGD; Algorithm 1) and two-timescale SGD (TSGD; Algorithm 2). There is a long line of work (Chen et al., 2021; Ghadimi & Wang, 2018; Ji et al., 2021) based on either SSGD or TSGD.
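For concreteness, the following Python sketch mirrors the structure of Algorithms 1 and 2 below. Here grad_g_y and grad_f stand in for the stochastic partial gradient ∇_y g and the outer gradient used in the updates; their exact form (and the treatment of the dependence of y on x) is deliberately left abstract, so this is a structural sketch rather than the paper's reference implementation.

```python
import numpy as np

def ssgd(grad_f, grad_g_y, x0, y0, D1, D2, K, ax, ay, seed=0):
    """Single-timescale SGD (Algorithm 1): update y and x simultaneously."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(K):
        i = rng.integers(len(D2))          # sample zeta_i for the inner gradient
        j = rng.integers(len(D1))          # sample xi_j for the outer gradient
        y_new = y - ay * grad_g_y(x, y, D2[i])
        x = x - ax * grad_f(x, y, D1[j])   # uses the current y, as in a simultaneous update
        y = y_new
    return x, y

def tsgd(grad_f, grad_g_y, x0, y0, D1, D2, K, T, ax, ay, seed=0):
    """Two-timescale SGD (Algorithm 2): T inner steps on y (warm-started), then one step on x."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, float), np.array(y0, float)
    for _ in range(K):
        for _ in range(T):                 # inner loop carries y over from the previous round
            i = rng.integers(len(D2))
            y = y - ay * grad_g_y(x, y, D2[i])
        j = rng.integers(len(D1))
        x = x - ax * grad_f(x, y, D1[j])
    return x, y
```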
3 GENERALIZATION AND STABILITY FOR BILEVEL OPTIMIZATION
Algorithmic stability is one of the classical approaches to analyzing the generalization bound of an algorithm. Roughly speaking, the algorithmic stability of a (randomized) algorithm A measures how the output of A changes if we change one data sample in the input dataset. While there are various notions of stability, most of the existing work on the stability of stochastic optimization, pairwise learning, and minimax optimization focuses on uniform stability (Bousquet & Elisseeff, 2002) and argument stability (Liu et al., 2017; Lei & Ying, 2020). We adopt these two notions of stability for the bilevel optimization problem. Briefly speaking, uniform stability measures the resulting change in the risk function, while argument stability measures the resulting change in the arguments, i.e., the output of the algorithm. Definition 4 (Algorithmic Stability). Let A : D_1^{m_1} × D_2^{m_2} → R^{d_1} × R^{d_2} be a randomized algorithm.
Algorithm 1 Single-timescale SGD (SSGD)
1: Input: number of iterations K, step sizes α_x, α_y, initialization x_0, y_0, datasets D_{m_1} and D_{m_2}
2: Output: x_K, y_K
3: for k = 0 to K − 1 do
4:   Uniformly sample i ∈ [m_2], j ∈ [m_1]
5:   y_{k+1} = y_k − α_y ∇_y g(x_k, y_k(x_k); ζ_i)
6:   x_{k+1} = x_k − α_x ∇f(x_k, y_k(x_k); ξ_j)
7: end for
8: return x_K and y_K

Algorithm 2 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes α_x, α_y, initialization x_0, y_0
2: Output: x_K, y_K
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     Uniformly sample i ∈ [m_2]
7:     y_k^{t+1} = y_k^t − α_y ∇_y g(x_k, y_k^t(x_k); ζ_i)
8:   end for
9:   Uniformly sample j ∈ [m_1]
10:  x_{k+1} = x_k − α_x ∇f(x_k, y_k^T(x_k); ξ_j)
11: end for
12: return x_K, y_K^T
(a) A is β-uniformly-stable in expectation if for all datasets D_{m_1}, D'_{m_1} ∼ D_1^{m_1} and D_{m_2} ∼ D_2^{m_2} such that D_{m_1} and D'_{m_1} differ in at most one sample, we have, for any ξ ∼ D_1,
E_A[ |f(A(D_{m_1}, D_{m_2}); ξ) − f(A(D'_{m_1}, D_{m_2}); ξ)| ] ≤ β.
A is β-uniformly-stable with probability at least 1 − δ if, for any ξ ∼ D_1, with probability at least 1 − δ,
|f(A(D_{m_1}, D_{m_2}); ξ) − f(A(D'_{m_1}, D_{m_2}); ξ)| ≤ β.
(b) A is β-argument-stable in expectation if for all datasets D_{m_1}, D'_{m_1} ∼ D_1^{m_1} and D_{m_2} ∼ D_2^{m_2} such that D_{m_1} and D'_{m_1} differ in at most one sample,
E_A[ ‖A(D_{m_1}, D_{m_2}) − A(D'_{m_1}, D_{m_2})‖_2 ] ≤ β.
Note that the definition of uniform stability in expectation is the same as the definition in Bao et al. (2021); our other definitions can thus be considered as extensions of the previously used stability notions for bilevel optimization. In the following, we present Theorem 1 as our first result, which shows a crucial relationship between the generalization gap and algorithmic stability for an algorithm A. Theorem 1. Let A : D_1^{m_1} × D_2^{m_2} → R^{d_1} × R^{d_2} be a randomized BO algorithm.
(a) If A is β-uniformly-stable in expectation, then the following holds for D_{m_1} ∼ D_1^{m_1}, D_{m_2} ∼ D_2^{m_2}:
E_{A, D_{m_1}}[R(A(D_{m_1}, D_{m_2})) − R_s(A(D_{m_1}, D_{m_2}))] ≤ β.
(b) If A is β-argument-stable in expectation and Assumption 2 holds, then the following holds for D_{m_1} ∼ D_1^{m_1}, D_{m_2} ∼ D_2^{m_2}:
E_{A, D_{m_1}}[R(A(D_{m_1}, D_{m_2})) − R_s(A(D_{m_1}, D_{m_2}))] ≤ L_f β.
(c) Assume that |f(x, y; ξ)| ≤ M for some M ≥ 0. If A is β-uniformly-stable almost surely, then for D_{m_1} ∼ D_1^{m_1}, D_{m_2} ∼ D_2^{m_2}, the following holds with probability 1 − δ:
|R(A(D_{m_1}, D_{m_2})) − R_s(A(D_{m_1}, D_{m_2}))| ≤ 2β + e( (4M/√m_1)·√(log(e/δ)) + 12√2·β⌈log_2 m_1⌉·√(log(e/δ)) ),
where e is the base of the natural logarithm.
Remark 1. The above theorem suggests that the generalization gap can be controlled by several notions of algorithmic stability. Parts (a) and (b) show that the expected generalization gap can be bounded by uniform stability and by argument stability times the Lipschitz constant, respectively; Part (c) indicates that the generalization gap of the algorithm is no more than O(β log(m_1) + 1/√m_1) with probability 1 − δ. Compared with the existing work (Bao et al., 2021), Theorem 1 additionally considers argument stability, which is a stronger notion of stability than uniform stability (since uniform stability can be deduced from argument stability when the function is sufficiently smooth). Moreover, we use McDiarmid's inequality and the equivalence of tails and moments for a random variable with a mixture of sub-gaussian and sub-exponential tails (Lemma 1 in Bousquet et al. (2020)), which provides a significantly improved high-probability bound in Part (c) (i.e., improving from O(β√m_1) in Bao et al. (2021) to O(β log m_1)).
4 STABILITY ANALYSIS FOR BILEVEL OPTIMIZATION ALGORITHMS
Motivated by Theorem 1, we can see that to analyze the generalization behavior of any algorithm, it suffices to analyze its stability. As mentioned in Section 2.2, we will consider the stability of SSGD and TSGD. For simplicity, we let SC-SC denote the case where f and g are both strongly convex; C-C, NC-NC, and NC-SC are defined similarly, with "C" standing for a convex function and "NC" for a nonconvex function.
4.1 STABILITY BOUNDS FOR SINGLE-TIMESCALE SGD
As we can see from Algorithm 1, SSGD updates y and x simultaneously. In the following we develop stability bounds for this algorithm in different settings. Theorem 2. Suppose that Assumptions 1 and 2 hold and Algorithm A is SSGD with K iterations:
(a) Assume that Problem (1) is SC-SC with strong convexity parameters µ_f and µ_g. Let α_x = α_y be the step sizes (see Lemma 9 for details) and denote l = max{l_f, l_g}. Then A is β-argument-stable in expectation, where
β ≤ O( (L_f² + L_g²)^{1/2} / ( m_1 ( µ_f + µ_g − (α_x l)²/2 + 0.25 ) ) ).
(b) Assume that Problem (1) is C-C. Let α_x, α_y be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m_1^{-1} √((α_x L_f)² + (α_y L_g)²) · (2 + 2 max{(α_x l_f)², (α_y l_g)²})^{K/2} ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{α_x, α_y} ≤ c/k for some constant c ≥ 0 and let l = max{l_f, l_g}. Then A is β-argument-stable in expectation, where
β ≤ O( (m_1 c l)^{-1} (2 c L_f √(l_f² + l_g²))^{1/(cl+1)} · K^{cl/(cl+1)} ),
where l_f, l_g and L_f, L_g are the smoothness and Lipschitz constants of f and g, respectively. Remark 2. Note that the above stability bounds are independent of the specific form of the objective function f(·, ·) and of the exact sample distribution D_1; they rely mainly on the properties of the loss functions and on the sample size m_1, and the bounds in the C-C and NC-NC cases additionally depend on the number of iterations. Specifically, Part (a) establishes a stability bound of O(1/m_1) in the SC-SC setting, and Part (b) gives a bound of O(κ_1^{K/2}/m_1) in the C-C case, depending on the number of iterations and the data size, where κ_1 is a constant. The NC-NC case in Part (c) provides a stability bound of O(K^{cl/(cl+1)}/m_1), where c is a constant controlling the step size and l is the larger of the smoothness constants l_f and l_g. These conclusions match the existing results for minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021).
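To get a feel for how the NC-NC bound in Part (c) scales with the number of iterations, the following snippet evaluates (2cL_f√(l_f²+l_g²))^{1/(cl+1)} K^{cl/(cl+1)}/(m_1 c l) for a few constants; the constants are illustrative placeholders chosen by us, not values from the paper, so only the qualitative growth in K is meaningful.

```python
import numpy as np

m1 = 10_000                                   # outer-level sample size (assumed)
c, l, Lf, lf, lg = 0.5, 2.0, 1.0, 2.0, 2.0    # illustrative constants, not from the paper
expo = c * l / (c * l + 1)

for K in [10, 100, 1_000, 10_000, 100_000]:
    bound = (2 * c * Lf * np.sqrt(lf**2 + lg**2)) ** (1 / (c * l + 1)) * K**expo / (m1 * c * l)
    print(f"K={K:>6}:  NC-NC stability bound ~ {bound:.2e}")
```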
4.2 STABILITY BOUNDS FOR TWO-TIMESCALE SGD
Compared with the above SSGD, two-timescale SGD (TSGD; Algorithm 2) typically achieves more accurate approximate solutions by updating y multiple times before each update of x. In this section, we extend our analysis from SSGD to TSGD. In particular, compared with the results in Bao et al. (2021), we provide stability bounds in Theorem 3 for the case where the inner-level parameter y is updated iteratively (i.e., warm-started across outer iterations). We further explore in Theorem 4 a particular NC-SC setting, which commonly appears in bilevel optimization applications such as meta-learning and hyperparameter optimization. Theorem 3. Suppose that Assumptions 1 and 2 hold and |g(·, ·)| ≤ 1. Let A be the TSGD algorithm with K outer-iterations and T inner-iterations. Then we have:
(a) Assume that Problem (1) is SC-SC. Let l = max{l_f, (1 + (α_y l_g)²)/((1 − α_y l_g)α_y)} and let α = α_x = α_y ≤ min{1/l_g, 1/(µ_f + µ_g)} be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m_1^{-1} √( L_f² α_x² + ( 2T/(α_y(2 − α_y l_g)) )² ) · (1 + αl)^K ).
(b) Assume that Problem (1) is C-C. Let αl = max{α_x l_f, (1 + (α_y l_g)²)/(1 − α_y l_g)} and α_x, α_y ≤ 1/l_g be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m_1^{-1} √( L_f² α_x² + ( 2T/(α_y(2 − α_y l_g)) )² ) · (1 + αl)^K ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{α_x, α_y} ≤ c/k for some constant c ≥ 0 and let l = max{l_f, l_g}. Then A is β-argument-stable in expectation, where
β ≤ O( (m_1 T c l)^{-1} (2 c L_f √(l_f² + T² l_g²))^{1/(Tcl+1)} · K^{Tcl/(Tcl+1)} ).
Remark 3. Compared with the previous results for SSGD, the stability bounds of TSGD depend on the number of outer-level iterations, the number of inner-level iterations, and the outer-level data size. If the step sizes are sufficiently small, the bounds in Theorem 3 are asymptotically the same as the bounds of SSGD in Theorem 2, so Theorem 3 can be considered a generalization of the previous result. The dependence on T also reveals our novelty compared with existing stability analyses for other problems, such as plain SGD and minmax problems. To the best of our knowledge, this work provides the first stability bounds for two-timescale (double loop) algorithms, which allow the accumulation of the sub-sampled gradients in the inner level. Remark 4. Comparing our results with those in Bao et al. (2021), we make the following observations. 1) They established only a uniform stability bound for the Unrolled Differentiation (UD) algorithm (Algorithm 3), which is reinitialized each time it enters the inner loop; their analysis thus tracks changes to only the outer-level parameter, while our algorithm tracks the updates of both parameters. 2) Their proof needs to assume that the update of y in the inner level after reinitialization is not affected by the value specified for x. However, this assumption is quite uncommon and is probably the reason that they need no assumption on the inner-level objective function (see Appendix E for details). In contrast, our work allows the inner-level parameters to be updated consistently (i.e., carrying over the value from the last update) instead of being reinitialized each time the inner loop is entered. Specifically, we allow y_k^T, rather than y_0, to be used at the beginning of the (k+1)-th outer-level iteration. This enables us to obtain different stability bounds for different inner-level objective functions from a novel perspective.
In the following, we extend our analysis to a particular NC-SC setting that is frequently encountered in real-world applications and optimization analysis. Theorem 4. Suppose that Assumptions 1 and 2 hold, 0 ≤ f(·, ·) ≤ 1, and Problem (1) is NC-SC. Let A be the TSGD algorithm with K outer-iterations and T inner-iterations, with max{α_x, α_y} ≤ c/k for a constant c ≥ 0. Denote l = max{l_f, l_g}. Then A is β-uniformly-stable in expectation, where
β ≤ O( (2 c L_f √(l_f² + l_g² T²))^{1/(c(Tl+l−µ_g)+1)} · K^{c(Tl+l−µ_g)/(c(Tl+l−µ_g)+1)} · (Tl + l − µ_g + 2/c) / ( m_1 (Tl + l − µ_g) ) ).
Remark 5. Compared with our previous analysis, we now sketch the technical differences. Here we bound the vector (δ_{x,k}, δ_{y,k})^T = (‖x_k − x'_k‖, ‖y_k − y'_k‖)^T, whereas the previous analysis used δ_k = √(‖x_k − x'_k‖_2² + ‖y_k − y'_k‖_2²), where (x_k, y_k) and (x'_k, y'_k) are the outputs of TSGD after k iterations on D_{m_1} and D'_{m_1}, respectively, with D_{m_1} and D'_{m_1} differing in one sample. In the NC-SC setting, we show that (δ_{x,k+1}, δ_{y,k+1})^T ≤ ((1 + α_x l)δ_{x,k}, (1 + α_x T l)δ_{y,k})^T (where ≤ denotes the entry-wise inequality), which means this quantity can be controlled; taking its expectation then yields our uniform stability bound. To obtain the generalization gap when both parameters change continuously, it is essential to account for the growth of (δ_{x,k}, δ_{y,k}) rather than only δ_{x,k} as in Bao et al. (2021). Appendix C.3 provides more details, and an empirical illustration of this quantity is sketched below.
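The quantities δ_{x,k} and δ_{y,k} can also be measured directly: run the same algorithm with shared randomness on two datasets differing in one sample and track the parameter distances over iterations. The sketch below does this for a toy quadratic bilevel problem; the losses, constants, and step sizes are our own assumptions, meant only to illustrate the argument-stability viewpoint.

```python
import numpy as np

d = 5
rng = np.random.default_rng(0)
D1 = rng.normal(size=(200, d))                 # outer samples xi
D1p = D1.copy(); D1p[0] = rng.normal(size=d)   # neighbouring dataset: one xi replaced
D2 = rng.normal(size=(200, d))                 # inner samples zeta

# Assumed smooth toy losses: g(x,y;z) = ||y-z||^2/2 + ||y-x||^2/2, f(x,y;s) = ||x+y-s||^2/2
grad_g_y = lambda x, y, z: (y - z) + (y - x)
grad_f_x = lambda x, y, s: (x + y - s)

def run_tsgd(D_outer, K=300, T=5, ax=0.01, ay=0.05, seed=1):
    rng = np.random.default_rng(seed)           # shared randomness across the two runs
    x, y, traj = np.zeros(d), np.zeros(d), []
    for _ in range(K):
        for _ in range(T):
            y = y - ay * grad_g_y(x, y, D2[rng.integers(len(D2))])
        x = x - ax * grad_f_x(x, y, D_outer[rng.integers(len(D_outer))])
        traj.append((x.copy(), y.copy()))
    return traj

ta, tb = run_tsgd(D1), run_tsgd(D1p)
for k in [50, 150, 299]:
    dx = np.linalg.norm(ta[k][0] - tb[k][0]); dy = np.linalg.norm(ta[k][1] - tb[k][1])
    print(f"k={k:3d}  delta_x={dx:.4f}  delta_y={dy:.4f}")
```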
Thus, based on our previous results, we now provide the first generalization bounds in the NC-NC setting for both SSGD and TSGD. Corollary 5. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Denote l = max{l_f, l_g} and let max{α_x, α_y} ≤ c/k for a constant c ≥ 0. Then the generalization gap of SSGD (Algorithm 1) with K iterations is bounded by O(K^{cl/(cl+1)}/m_1).
Corollary 6. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Let l = max{l_f, l_g} with max{α_x, α_y} ≤ c/k. Then the generalization gap of TSGD (Algorithm 2) with K outer iterations and T inner iterations is bounded by O(T^{1/(Tcl+1)} K^{1 − 1/(Tcl+1)} / m_1).
Remark 6. By Theorems 1, 2, and 3, we can derive the above corollaries on the generalization gap from the stability bounds. Corollaries 5 and 6 show that extremely large numbers of iterations (K for SSGD; K and T for TSGD) drastically reduce the stability of these algorithms and enlarge the generalization gap, increasing the risk of overfitting. We also verify this in the following experiments.
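The joint effect of K and T in Corollary 6 can be tabulated directly. Below we evaluate T^{1/(Tcl+1)} K^{1−1/(Tcl+1)}/m_1 on a small grid; the constants are placeholders chosen by us only to show the qualitative growth that the meta-learning experiment in Section 5 examines.

```python
m1, c, l = 2_000, 0.5, 2.0   # placeholder constants, not from the paper
for K in [100, 1_000, 10_000]:
    for T in [1, 5, 20]:
        e = 1.0 / (T * c * l + 1)
        print(f"K={K:>6} T={T:>3}  generalization gap bound ~ {T**e * K**(1 - e) / m1:.3e}")
```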
5 EXPERIMENTS
In this section, we empirically validate our theoretical results on real-world datasets. Two experiments, meta-learning and hyperparameter optimization, are conducted via TSGD (Algorithm 2); note that when T = 1, TSGD reduces to SSGD. Due to space limitations, we present only the meta-learning experiment here, leaving the hyperparameter optimization experiment and other details to Appendix D.
5.1 META LEARNING
Consider the few-shot meta-learning problem with M tasks {Ti, i = 1, ...,M} sampled from distribution PT . We aim to learn a model that can rapidly adapt to different tasks. Firstly, the embedding model ϕ is shared by all tasks to learn embedded features. Secondly, the task-specific parameter wi is to adapt the shared embedding to its own sub-problem. Thus, the overall problem of meta-learning can be formulated as follow:
min_ϕ L_D(ϕ, w̄*) = E_{ξ ∈ D_i^{te}, T_i}[ L(ϕ, w_i*; ξ) ],   (3a)
s.t. w̄* = argmin_{w̄} [ L_{D^{tr}}(ϕ, w̄) = E_{T_i}[ L_{D_i^{tr}}(ϕ, w_i) ] ],   (3b)
where D_i^{tr} and D_i^{te} are the training and testing datasets for task T_i. Each w_i is computed from one or more gradient descent updates starting from w̄ on the corresponding task (rapid adaptation), i.e., w_i = w̄ − α∇_{w̄} L_{D^{tr}}(ϕ, w_i). In the inner level, the base learner optimizes the collection of w_i for each task (Equation 3b). In the outer level, the meta-learner optimizes the embedding model ϕ using the minimizers w_i* learned from the inner level and computes the loss on the testing dataset (Equation 3a).
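In code, the bilevel structure of (3a)-(3b) corresponds to an inner adaptation step per task followed by an outer update of the shared embedding. The PyTorch-style sketch below is only a schematic of that loop with made-up module names and task batches; it is not the exact training script used in the experiments.

```python
import torch

# Hypothetical modules: a shared embedding (outer variable phi) and a task head (inner variable w).
embed = torch.nn.Sequential(torch.nn.Linear(784, 64), torch.nn.ReLU())
head = torch.nn.Linear(64, 5)
loss_fn = torch.nn.CrossEntropyLoss()
outer_opt = torch.optim.SGD(embed.parameters(), lr=0.002)
alpha = 0.01  # inner step size

def adapt(x_tr, y_tr, steps=1):
    """Inner level (3b): a few gradient steps on the task-specific head w_i."""
    w, b = head.weight.clone(), head.bias.clone()
    for _ in range(steps):
        logits = embed(x_tr) @ w.t() + b
        g_w, g_b = torch.autograd.grad(loss_fn(logits, y_tr), (w, b), create_graph=True)
        w, b = w - alpha * g_w, b - alpha * g_b
    return w, b

def outer_step(task_batch):
    """Outer level (3a): update phi with the query-set loss of the adapted heads."""
    outer_opt.zero_grad()
    total = 0.0
    for (x_tr, y_tr, x_te, y_te) in task_batch:    # each task: support and query sets
        w, b = adapt(x_tr, y_tr)
        total = total + loss_fn(embed(x_te) @ w.t() + b, y_te)
    (total / len(task_batch)).backward()           # only phi is stepped; heads are re-adapted per task
    outer_opt.step()
```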
Settings and Implementation. We evaluate the behavior on the 5-way-1-shot task on the Omniglot dataset (Lake et al., 2015), i.e., the goal is to classify 5 unseen classes from only 1 labeled sample each. The dataset contains 1623 different handwritten characters from 50 different alphabets; images are greyscale with size 28 × 28. We follow settings similar to Ji et al. (2021). A five-layer fully-connected network is constructed, where the task-specific parameter w_i corresponds to the last layer of the network and the shared embedding model ϕ corresponds to all preceding layers. Thus, we train the two sets of layers separately in the outer and inner levels of optimization. We build our model and training pipeline using the software library learn2learn (Arnold et al., 2020). We follow the official train-validation-test partition and train ϕ and w_i using the training set. The layer sizes of the network are 784 → 256 → 128 → 64 → 64 → 5 (a sketch of this architecture is given below). We set the number of tasks for both the training and testing sets to 2000 and the task batch size to 32. The learning rates of ϕ and w_i are 0.002 and 0.01, respectively. Results are averaged over 5 trial runs with different random seeds.
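For reference, the described fully-connected architecture could be written as follows; the activation choice and initialization are our assumptions, since the text does not specify them.

```python
import torch.nn as nn

embedding = nn.Sequential(            # shared phi: all layers before the classifier
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
classifier = nn.Linear(64, 5)         # task-specific w_i, adapted in the inner level
```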
Results Evaluation. Figure 1 presents the learning curves on the training set and the testing set, together with the generalization gap, for different values of the inner iteration count T and outer iteration count K. The generalization gap is estimated by the difference between training and testing loss. On the one hand, the model easily overfits the testing set as K increases (Figure 1b), and the effect of T is very limited. On the other hand, for an appropriate value of K, a smaller T results in underfitting (T = 1 in Figure 1c causes the highest generalization gap due to an underfit training process). The trend of the generalization gap in terms of K and T indicates that large iteration numbers increase the risk of overfitting, which matches our analysis in Theorem 4 that the stability of TSGD (Algorithm 2) decreases drastically.
6 CONCLUSION
We give a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization framework. In particular, we establish a quantitative connection between generalization and algorithmic stability and provide the first generalization bounds for the case where both the inner and outer parameters are updated continuously, in multiple settings. Our experiments suggest that inappropriate numbers of iterations easily cause underfitting or overfitting, and the observed trend of the generalization gap also validates our theoretical results.
As discussed in the previous sections, we considered only first-order methods, while there exist a number of second-order (hypergradient-estimating) and momentum-based approaches for solving the bilevel optimization problem. Handling the approximation of the hypergradient in the generalization analysis is a direction for future work.
A COMPARISON BETWEEN UD AND TSGD
Algorithm 3 Unrolled differentiation (UD)
1: Input: number of iterations K, step sizes α_x, α_y, initialization x_0, y_0
2: Output: x_K, y_K
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_0
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − α_y ∇_y g(x_k, y_k^t(x_k); D_{m_2})
7:   end for
8:   x_{k+1} = x_k − α_x ∇f(x_k, y_k^T(x_k); D_{m_1})
9: end for
10: return x_K, y_K^T

Algorithm 4 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes α_x, α_y, initialization x_0, y_0, datasets D_{m_1}, D_{m_2}
2: Output: x_K, y_K
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − α_y ∇_y g(x_k, y_k^t(x_k); D_{m_2})
7:   end for
8:   x_{k+1} = x_k − α_x ∇f(x_k, y_k^T(x_k); D_{m_1})
9: end for
10: return x_K, y_K^T
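The only algorithmic difference between UD (Algorithm 3) and TSGD (Algorithm 4) is the initialization of the inner loop: UD resets y to y_0 before every inner loop, while TSGD warm-starts from the previous round's y_{k−1}^T. A minimal sketch of the two initializations, with hypothetical gradient callables, is given below.

```python
def inner_loop(x, y_init, grad_g_y, data, T, ay):
    """Run T inner gradient steps on y starting from y_init (hypothetical grad_g_y)."""
    y = y_init
    for _ in range(T):
        y = y - ay * grad_g_y(x, y, data)
    return y

# UD (Algorithm 3): reinitialize before each outer step.
#   y = inner_loop(x, y0, grad_g_y, D2, T, ay)
# TSGD (Algorithm 4): warm-start from the last inner iterate.
#   y = inner_loop(x, y, grad_g_y, D2, T, ay)
```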
B PROOF OF PRELIMINARIES
B.1 THE PROOF OF THEOREM 1
Proof of Part (a). Since ξ and ξ_i are drawn from the same distribution, we know
E_A[R(A(D_{m_1}, D_{m_2}); D_1) − R_s(A(D_{m_1}, D_{m_2}); D_{m_1})]
= E_{A, ξ_i ∈ D_{m_1}, ξ ∼ D_1}[ f(A(D_{m_1}, D_{m_2}); ξ) − f(A(D_{m_1}, D_{m_2}); ξ_i) ]
= E_{A, ξ_i ∈ D_{m_1}, ξ ∼ D_1}[ f(A(ξ, ξ_2, ..., ξ_{i−1}, ξ_{i+1}, ..., ξ_{m_1}, D_{m_2}); ξ_i) − f(A(D_{m_1}, D_{m_2}); ξ_i) ]
= E_{A, ξ_i ∈ D_{m_1}, ξ ∼ D_1}[ f(A(D'_{m_1}, D_{m_2}); ξ_i) − f(A(D_{m_1}, D_{m_2}); ξ_i) ] ≤ β,
where D'_{m_1} and D_{m_1} differ in at most one sample ξ_i.
Proof of Part (b). By the same argument as in Part (a),
E_A[R(A(D_{m_1}, D_{m_2}); D_1) − R_s(A(D_{m_1}, D_{m_2}); D_{m_1})]
= E_{A, ξ_i ∈ D_{m_1}, ξ ∼ D_1}[ f(A(D'_{m_1}, D_{m_2}); ξ_i) − f(A(D_{m_1}, D_{m_2}); ξ_i) ]
≤ E_{A, ξ_i ∈ D_{m_1}, ξ ∼ D_1}[ L_f ‖A(D'_{m_1}, D_{m_2}) − A(D_{m_1}, D_{m_2})‖ ] ≤ L_f β.
To prove high probability bounds, we need the following lemma on the concentration behavior of sums of weakly dependent random variables. Lemma 7 (Bousquet et al. 2020). Let Z = (Z_1, ..., Z_n) be a vector of independent random variables, each taking values in Z, and let g_1, ..., g_n be functions g_i : Z^n → R such that the following holds for any i ∈ [n]:
• |E[g_i(Z) | Z_i]| ≤ M a.s.;
• E[g_i(Z) | Z_{[n]\{i}}] = 0 a.s.;
• g_i has bounded differences β with respect to all variables except the i-th variable.
Then, for any p ≥ 2,
‖ Σ_{i=1}^{n} g_i(Z) ‖_p ≤ 12√(2p)·n·β·⌈log_2 n⌉ + 4M√(pn),
where the L_p-norm of a random variable Z is denoted by ‖Z‖_p := (E[|Z|^p])^{1/p}, p ≥ 1.
Next, we state the following well-known relationship between tail bounds and moment bounds.
Lemma 8 (Bousquet et al. 2020; Vershynin 2018). Let a, b ∈ R_+ and let Z be a random variable with ‖Z‖_p ≤ √p·a + p·b for all p ≥ 2. Then, for any δ ∈ (0, 1), with probability at least 1 − δ,
|Z| ≤ e( a·√(log(e/δ)) + b·log(e/δ) ).
Proof of Part (c). In order to make use of Lemma 7 to obtain the generalization bounds, we will introduce:
hi = Eξ′i∼D1 [Eξi∼D1 [f(A(D i m1 , Dm2 ; ξ))]− f(A(D i m1 , Dm2 ; ξi)],
where Dim1 = {ξ1, ξ2, ..., ξi−1, ξ ′ i, ξi+1, ..., ξm1}, and ξ′i obeys identical distribution of ξi.
Hence, we have:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
= 1
m1 ∣∣∣ m1∑ i=1 (Eξ∼D1f(A(Dm1 , Dm2); ξ)− f(A(Dm1 , Dm2); ξi)) ∣∣∣
≤ 1 m1 ∣∣∣ m1∑ i=1 ( Eξ∼D1f(A(Dm1 , Dm2); ξ)− Eξ∼D1,ξ′i∼D1f(A(D i m1 , Dm2); ξ) ) ∣∣∣ +
∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2); ξi) ∣∣∣∣∣ + 1
m1 ∣∣∣∣∣ m1∑ i=1 ( Eξ′i∼D1f(A(D i m1 , Dm2); ξi)− f(A(Dm1 , Dm2); ξi) )∣∣∣∣∣ . It then follows from the definition of uniform stability that
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
≤2β + ∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2 ; ξi)) ∣∣∣∣∣ =2β + 1
m1 ∣∣∣∣∣ m1∑ i=1 hi ∣∣∣∣∣ . Notice that all conditions of 7 hold. Thus, the following outcome can be derived for any p ≥ 2:∥∥∥∥∥ m1∑ i=1 hi(ξ) ∥∥∥∥∥ p ≤ 12 √ 2pm1β ⌈log2 m1⌉+ 4M √ pm1.
Combining Lemma 7 and Lemma 8 with hi defined above, we have the following inequality with probability 1− δ: ∣∣∣∣∣ m1∑ i=1 hi(ξ) ∣∣∣∣∣ ≤ e( 4M√m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The deviation bound now follows immediately:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)| ≤ 2β + e ( 4M √ m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The proof is completed.
C MAIN PROOF
C.1 APPROXIMATE EXPANSIVITY OF UPDATE RULES
With step sizes α_x and α_y, the update rule for the single-timescale algorithm can be written as
G_s((x, y)) := ( x − α_x ∇f(x, y),  y − α_y ∇_y g(x, y) ).
Definition 5 (Expansivity). An update rule G is η-expansive if for every x, x' ∈ R^{d_1} and y, y' ∈ R^{d_2},
‖G(x, y) − G(x', y')‖_2 ≤ η √( ‖x − x'‖_2² + ‖y − y'‖_2² ).
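Definition 5 can be checked numerically: for a smooth toy problem, the ratio ‖G_s(u) − G_s(u')‖/‖u − u'‖ over random pairs should stay below the coefficient given in Lemma 9. The snippet below performs this sanity check on an assumed pair of convex quadratic objectives; it is only an illustration, not part of the proofs.

```python
import numpy as np

d, ax, ay = 4, 0.05, 0.05
rng = np.random.default_rng(0)

# Assumed convex toy objectives: f(x,y) = ||x||^2/2 + ||x-y||^2/2,  g(x,y) = ||y-x||^2/2
grad_f_x = lambda x, y: 2 * x - y
grad_g_y = lambda x, y: y - x

def G_s(x, y):                               # joint SSGD update map
    return x - ax * grad_f_x(x, y), y - ay * grad_g_y(x, y)

ratios = []
for _ in range(10_000):
    u, up = rng.normal(size=2 * d), rng.normal(size=2 * d)
    a = G_s(u[:d], u[d:]); b = G_s(up[:d], up[d:])
    num = np.sqrt(np.linalg.norm(a[0] - b[0]) ** 2 + np.linalg.norm(a[1] - b[1]) ** 2)
    ratios.append(num / np.linalg.norm(u - up))

lf, lg = 3.0, 1.0                            # crude smoothness bounds for this toy problem
print("max empirical expansivity ratio:", max(ratios))
print("Lemma 9 convex-case coefficient:", np.sqrt(2 + 2 * max((lf * ax) ** 2, (lg * ay) ** 2)))
```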
Lemma 9. Suppose that Assumptions 1 and 2 hold for Problem (1). Then:
1. If f and g are non-convex functions, then G_s is (1 + max{l_f α_x, l_g α_y})-expansive with step sizes α_x, α_y.
2. If f and g are convex functions, then G_s is √(2 + 2 max{(l_f α_x)², (l_g α_y)²})-expansive with step sizes α_x, α_y.
3. If f and g are strongly convex with parameters µ_f and µ_g respectively, then G_s is √(2(1 − 2α_x(µ_f + µ_g) + α_x² l²))-expansive with step size satisfying
( (µ_f + µ_g) − √((µ_f + µ_g)² − 0.5 l²) ) / l² ≤ α_x = α_y
≤ min 1µf + µg , (uf + µg) + √ (uf + µg) 2 − 0.5l2 l2 . Proof. In Case 1 with the NC-NC objectives and the smoothness of objectives on Assumptions 1 and 2, we have∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y′))y − y′ + αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥+ ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y′))αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ (1 + max{lfαx, lgαy}) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ . In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y)) αy (∇yg(x,y)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y))αy (∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (4)
and∥∥∥∥Gs([ x′y ]) −Gs ([ x′ y′ ])∥∥∥∥2 = ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 − 2 [ x′ − x′y − y′ ]T [ αx (∇f(x′,y′)−∇f (x′,y)) αy (∇yg(x′,y′)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x′,y)−∇f (x′,y′))αy (∇yg(x′,y)−∇yg (x′,y′)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 + ∥y − y′∥2 . (5)
Combining the above equations 6, 7 and inequality ( ∑k
i=1 ak) 2 ≤ k ∑k i=1 a 2 k, we can derive the
expansive of update rule Gs under convexity condition:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ (2 + 2max{(lfαx)2, (lgαy)2})∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥
2 + ∥y∥2) will be convex. With the above conclusions, we can derive the following:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y)) (∇yg(x,y)−∇yg (x′,y))
] + αx 2 ∥∥∥∥[ (∇f(x,y)−∇f (x′,y))(∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥[ (∇f̃(x,y)−∇f̃ (x′,y))(∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ] ≤ ( 1− 2αx (µf + µg) + αx2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , lg}, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥[ ∇f(x,y)−∇f (x′,y)∇yg(x,y)−∇yg (x′,y) ]∥∥∥∥2
= ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]
≥ ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ 2 (1− 2αx (µf + µg) + αx2l2) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
C.2 SINGLE TIMESCALE
We first introduce the following lemma before providing the proof of the Theorem.
Lemma 10 (Hardt et al. (2016)). Consider two sequences of updates G_s^1, ..., G_s^K and (G_s^1)', ..., (G_s^K)' with initial points x_0 = x'_0, y_0 = y'_0. Define δ_k = √( ‖x_k − x'_k‖² + ‖y_k − y'_k‖² ). Then:
δ_{k+1} ≤ η δ_k, if G_s^k = (G_s^k)' is η-expansive;
δ_{k+1} ≤ min(η, 1) δ_k + 2σ, if G_s^k is η-expansive and sup_{x,y} ‖(x, y) − G((x, y))‖ ≤ σ.
Proof. The first part of the inequality is obvious from the definition of expansivity and the assumption of Gks = (G k s) ′. For the second bound, note that:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) − [ xk yk ] + [ x′k y′k ] −G′s ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ xk − x′kyk − y′k ]∥∥∥∥
≤ δk + ∥∥∥∥Gs([ xkyk ]) − [ xk yk ]∥∥∥∥+ ∥∥∥∥G′s([ x′ky′k ]) − [ x′k y′k ]∥∥∥∥ ≤ δk + 2σ.
Also, δk+1 can be further expressed as:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ]) +Gs ([ x′k y′k ]) −G′s ([ x′k y′k
])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −G′s ([ x′k yk′
])∥∥∥∥ ≤ ηδk + 2σ.
Combining the above completes the proof of the Lemma 10.
Now, we are ready to prove Theorem 2:
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing only in one sample. Consider the updates G1s, ..., G K s and (G 1 s) ′, ..., (GKs ) ′. We can observe that the example chosen by the algorithm is the same in Dm1 , D ′ m1 at step k with probability 1−1/m1 and different with proba-
bility 1/m1. In the former case, we have identical update rules, while √ 1− 2αx (µf + µg) + α2xl2-
expansive can be employed in the latter through lemma 10. E [δk+1] ≤ ( 1− 1
m1
)( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 1 m1 E [δk] + 1 m1 2 √ (αxLf )2 + (αxLg)2
≤ ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 2 m1 √ (αxLf ) 2 + (αxLg) 2
≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
k∑ i=0 ( 2 ( 1− 2αx (µy + µg) + α2xl2 ))i/2 ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))i/2 (1) ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 1− 2αx (µf + µg) + α2xl2 + 0.5 )i = √ (αxLf )2 + (αxLg)2
m1 ( αx (µf + µg)− α 2 xl 2 2 + 0.25 )
=
√ L2f + L 2 g
m1 (µf + µg − (αxl)2/2 + 0.25) .
Here (1) comes from the mean equality √ ab ≤ (a + b)/2 for any a, b ≥ 0 and the assumption of (uf+µg)− √ (uf+µg) 2−0.5l2
l2 ≤ αx ≤ (uf+µg)+
√ (uf+µg)
2−0.5l2 l2 , which finishes the proof.
Proof of Part(b). The proof of Part(b) is analogous to the above, thus we use the same notations for this part.
E [δk+1] ≤ ( 1− 1
m1
)( 2 + 2max { l2fα 2 x, l 2 yα 2 y })1/2 E [δk] + 1 m1 E [δk] + 2 m1 √ L2fα 2 x + L 2 gα 2 y
= ( 2 + 2max { l2fα 2 x, l 2 gα 2 y })1/2 E [δk] + 2 √ L2fα 2 x + L 2 gα 2 y
m1 E [δk] ≤ 2 √ L2fα 2 x + L 2 gα 2 y
m1 ·
( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2 − 1√
2 + 2max { l2fα 2 x, l 2 gα 2 y } − 1
E [δk] ≤ O √ L2fα 2 x + L 2 gα 2 y ( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2
m1
.
To prove stability in the NC-NC case, we introduce the following lemma:
Lemma 11 (Hardt et al. (2016)). Assume that f(x,y; ξ) is Lf -Lipschitz continuous and 0 ≤ f(x,y; ξ) ≤ 1. Let Dm1 and D′m1 be two datasets differing in only one sample. Denote (xK ,yK) and (x′K ,y ′ K) as the output of K steps of SSGD (single-timescale algorithm) on Dm1
and D′m1 , respectively. Then, the following holds for every k ∈ {0, 1, ...,K}, where δk =√ ∥xk − x′k∥ 2 + ∥yk − y′k∥ 2:
E [|f (xk,yk; ξ)− f (x′k,y′k; ξ)|] ≤ k0 m1 + LfE [δk | δk0 = 0] .
Proof of Part(c). Applying Lemma 11, we get ready to prove the NC-NC case. Analogous to the previous case, we have:
E [δk+1] ≤ ( 1− 1
m1
)( 1 + cl
k
) E [δk] + 1
m
( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
k
= ( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
m1k .
The following can be derived:
E [δK | δk0 = 0] ≤ K∑
k=k0+1
T∏ t=k+1 ( 1 + cl t ) 2c√l2f + l2g m1k
≤ K∑
k=k0+1
T∏ t=k+1 { exp ( cl t )} 2c√l2f + l2g m1k
≤ K∑
k=k0+1
exp
( K∑
t=k+1
cl
t
) 2c √ l2f + l 2 g
m1k
≤ k∑
k=k0+1
exp(cl · log(K/k)) 2c √ l2f + l 2 g
m1k ≤ 2c √ l2f + l 2 g
m1
K∑ k=k0+1 k−cl−1
≤ 2 √ l2f + l 2 g
m1l
( K
k0
)cl .
Hence, Lemma 11 indicates:
E [|f(x, y)− f (x′, y′)|] ≤ k0 m1
+ 2Lf
√ l2f + l 2 g
m1l
( K
k0
)cl .
The right hand side is approximately minimized when k0 = ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1 .
Therefore, we have
β ≤ O ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1
m1cl for argument stability.
C.3 TWO-TIMESCALE SGD (TSGD)
C.3.1 STANDARD SETTINGS
With step size αx and αy , the update rule for two-timescale can be presented as:
GT ([ xk yk ]) := [ xk − αx∇f(xk,yTk ) yk − αy ∑T t=1∇yg(xk,ytk) ] .
Analogous to the single-timescale case, we first provide the expansivity of the update rules.
Lemma 12. Suppose that Assumptions 1 and 2 hold for Problem (1). Let αl = max{αxlf , 1+(αylg) 2
1−αylg } for simplicity sake and assume αylg ≤ 1. Then:
1. If f and g are non-convex functions, GT is (1 + αlT )-expansive.
2. If f and g are convex functions, GT is (1 + αl)-expansive with step size αx, αy .
3. If f and g are strongly-convex with µf and µg respectively, GT is 1 + αl-expansive with step size:
αx = αy ≤ 1
µf + µg .
Proof. In Case 1 with the NC-NC objectives by the triangle inequality, we have:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥+∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ The first item can be derived from:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y))y − y + αy∑Ty=1 (∇yg(x,yt)−∇yg (x′,yt)) ]∥∥∥∥
≤ (1 + αyT lg) ∥x− x′∥
The second item can be derived from:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥∥ [ x′ − x′ − αx (∇f(x′,y)−∇f (x′,y′)) y − y′ + ∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
≤ ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥+ ∥∥∥∥∥ [ αx (∇f(x′,y)−∇f (x′,y′))∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
From the Lipschitz continuous, we have:
T−1∑ t=0 αy ( ∇yg ( x,yt ) −∇yg ( x,yt )) ≤ T−1∑ t=0 αylg ∥∥yt − yt∥∥
Now we consider the t-th update: αylg ∥∥yt − yt∥∥ = αylg ∥∥yt−1 − αy∇yg (x′,yt−1)− yt−1 + αy∇yg (x′,yt−1)∥∥
≤ αylg ∥∥∥yt−1 − (yt−1)′∥∥∥+ (αylg)2 ∥∥∥yt−1 − (yt−1)′∥∥∥
· · · ≤ (αylg)t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′∥∥∥
According to the accumulation of the both side, we have: T−1∑ t=0 αylg ∥∥∥yt − (yt)′∥∥∥ ≤ αylg ∥∥∥y0 − (y0)′∥∥∥ ∥ T−1∑ t=1 [ (αylg) t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′]∥∥∥
=
[ 1− (αylg)T
1− αylg +
(αylg) 2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ = [ 1− (αylg)T + (αylg)2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ ≤ 1 + (αylg) 2
1− αylg ∥∥y − (y)′∥∥
Let αl = max{αylg, 1+(αylg) 2
1−αylg }, then:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + Tαl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ ∥∥∥∥∥ [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
≤ max (lfαx)2, ( 1 + (αylg) 2
1− αylg
)2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (6)
and the second decomposition can be obtained by the NC-NC case:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ( 1 + max{lfαx, 1 + (αylg) 2
1− αylg }
) ∥y − y′∥ . (7)
let αl = max{αxlf , 1+(αylg) 2 1−αylg }. Combining the above equations 6, 7 and inequality √ 1 + (αl)2 ≤ (1 + αl)2, then we can derive the expansive of update rule GT under convexity condition:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥ 2 + ∥y∥2) will be convex. Let αx = αy = α and denote αl = max{αxlf , 1+(αylg) 2
1−αylg }, we can derive the following with the conclusions from the convex case:∥∥∥∥GT ([ xy ]) −GT ([ x′ y
])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ αx 2 ∥∥∥∥∥ [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥∥ [ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ]∥∥∥∥∥ 2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≤ ( 1− 2α (µf + µg) + α2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , 1+(αylg) 2
(1−αylg)αy }, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥∥ [ ∇f(x,y)−∇f (x′,y)∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≥ ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
Proof. Because the main proof of Lemma 12 is similar to that of Lemma 9, we omit it.
Next, we give a bound for the update rule GT and prepare to prove Theorem 3. Since g() is a lg-smooth function, we have:
g ( x,yt+1 ) ≤ g ( x,yt ) + 〈 ∇g ( x,yt ) ,yt+1 − yt 〉 +
lg 2 ∥∥yt+1 − yt∥∥2 . ≤ g ( x,yt ) − 〈 ∇g ( x,yt ) , αy∇g ( x,yt )〉 +
lg 2 ∥∥αy∇g (x,yt)∥∥2 ≤ g ( x,yt ) − αy ( 1− αylg
2 )∥∥∇g (x,yt)∥∥2 . The two sides are accumulated from t = 1 to t = T and we could derive the following by Cauchy–Schwarz inequality:
T∑ t=1 ∥∥∇g (x,yt)∥∥2 ≤ g (x,y1)− g (x,yT ) αy (2− αylg) ⇒
( T∑
i=1
∇g(x,yt) )2 ≤ T
T∑ i=1 ∇g2(x,yt)
≤ T (g (x,y1)− g (x,yT )) αy(2− αylg) .
Hence, the bound of GT equals to √ L2fα 2 x + ( T (g(x,y1)−g(x,yT ))
αy(2−αylg)
)2 . Now, we are ready to give the
proof of Theorem 3.
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing in only one sample. Consider the updates G1T , ..., G | 1. What is the focus of the paper regarding bilevel optimization problems?
2. What are the strengths and weaknesses of the proposed approach in addressing generalization issues?
3. Do you have any concerns or suggestions regarding the expression used for "generalization error"?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional insights that the paper could provide to explain practical phenomena and design effective learning algorithms?
6. How does the reviewer view the limitation of the previous work compared to the current approach?
7. Can the authors provide more explanations and discussions on dealing with cases where the inner variable depends on the outer variable during updates? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the generalization of bilevel optimization problems, which are widely used in machine learning, e.g., meta-learning, hyper-parameter optimization, and reinforcement learning. Specifically, the authors conduct a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization problem. Technically, in comparison with the previous work, an (advanced) algorithmic stability tool is used to give a high-probability generalization bound which improves the previous best one from O(√n) to O(log n), where n is the sample size. Besides, some particular settings (e.g., strongly-convex-strongly-convex (SC-SC)) are also involved. Finally, experimental results are provided to support the theoretical results.
Strengths And Weaknesses
Strength
Overall, this paper is very well written.
This paper is sound since the claims are supported by formal theoretical results. Besides, experimental results are also provided to corroborate the theory.
The proofs seem reasonable and right although I have not checked them line-by-line.
It clearly discusses the differences with related work, especially the previous work[1*].
Weaknesses
The expression about "generalization error" is not suitable. In this paper, the authors define the bilevel generalization error as the difference between the population risk and empirical risk. It is strange because in statistical learning theory, the generalization error usually refers to the population risk and the related expression in this paper can be confusing. I suggest the authors use the generalization gap, just as in previous work[1*].
While this paper refines the generalization analyses over previous work[1*], few additional insights can be offered to explain the practical phenomena and design effective learning algorithms in comparison with previous work[1*].
I do not agree with the expression about the limitation of the previous work [1*] (mainly in Remark 4). [1*] made an assumption that the update of y in the inner level after the reinitialization will not be affected by the value specified for x, which may have a gap with the practical algorithms. But the main reason is to decouple the variables x and y, and the proof is technically right.
As discussed in point 3, how do the authors deal with the case where, when updating the outer variable x, the inner variable y can be dependent on x and the hyper-gradient is used? Please give more explanations and discussions.
[1*] Fan Bao, Guoqiang Wu, Chongxuan Li, Jun Zhu, and Bo Zhang. Stability and Generalization of Bilevel Programming in Hyperparameter Optimization. NeurIPS 2021.
Clarity, Quality, Novelty And Reproducibility
Please see the section 'Clarity, Quality, Novelty And Reproducibility'. |
ICLR | Title
On Stability and Generalization of Bilevel Optimization Problems
Abstract
(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with a wide range of applications such as meta-learning, hyper-parameter optimization, and reinforcement learning. Most of the existing studies on this problem only focused on analyzing the convergence or improving the convergence rate, while little effort has been devoted to understanding its generalization behaviors. In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem. We first establish a fundamental connection between algorithmic stability and generalization gap in different forms and give a high probability generalization bound which improves the previous best one from O( √ n) to O(log n), where n is the sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
N/A
√ n) to O(log n), where n is the
sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
1 INTRODUCTION
(Stochastic) bilevel optimization is a widely confronted problem in machine learning with various applications such as meta-learning (Finn et al., 2017; Bertinetto et al., 2018; Rajeswaran et al., 2019), hyper-parameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Baydin et al., 2017; Bergstra et al., 2011; Luketina et al., 2016), reinforcement learning (Hong et al., 2020), and few-shot learning (Koch et al., 2015; Santoro et al., 2016; Vinyals et al., 2016). The basic form of this problem can be defined as follows
min x∈Rd1
R(x) = F (x,y∗(x)) := Eξ [f (x,y∗(x); ξ)]
s.t. y∗(x) = arg min y∈Rd2
{G(x,y) := Eζ[g(x,y; ζ)]} , (1)
where f : Rd1 × Rd2 → R and g : Rd1 × Rd2 → R are two continuously differentiable loss functions with respect to x and y. Problem (1) has an optimization hierarchy of two levels, where the outer-level objective function f depends on the minimizer of the inner-level objective function g.
Due to its importance, the above bilevel optimization problem has received considerable attention in recent years. A natural way to solve problem (1) is to apply alternating stochastic gradient updates with approximating ∇yg(x,y) and ∇f(x,y), respectively. Briefly speaking, previous efforts mainly examined two types of methods to perceive an approximate solution that is close to the optimum y∗(x). One is to utilize the single-timescale strategy (Chen et al., 2021; Guo et al., 2021; Khanduri et al., 2021; Hu et al., 2022), where the updates for y and x are carried out simultaneously. The other one is to apply the two-timescale strategy (Ghadimi & Wang, 2018; Ji et al., 2021;
Hong et al., 2020; Pedregosa, 2016), where the update of y is repeated multiple times to achieve a more accurate approximation before conducting the update of x.
While there is a long list of work on bilevel optimization, most of the existing work only focuses on either analyzing its convergence behaviors (Ghadimi & Wang, 2018; Hong et al., 2020; Ji et al., 2021) or improving its convergence rate, based on the convexity and the smoothness properties of f(·, ·) and/or g(·, ·) (Liu et al., 2020; Li et al., 2020). Contrarily, only little effort is devoted to understanding the generalization behavior of the problem. To the best of our knowledge, there is only one recent work on the generalization analysis for bilevel problems (Bao et al., 2021), which presents the first expected uniform stability bound. However, there are still several undesirable issues in this work: (1) Their result is only for the uniform stability (which could be deduced from argument stability with certain conditions, see Definition 4 for details), leaving the analysis of other stronger definitions of algorithmic stability open; (2) Additionally, the UD algorithm allows the outer level parameters to be updated continuously but needs to reinitialize the inner level parameters before each iteration in the inner loop, which is not commonly used in practice due to their inefficiency (see line 4 in Algorithm 3). (3) The proof of Theorem 2 in their work is unclear to show whether the update of outer level parameters is argument dependent on the inner level parameters, where may exist some gap in the analysis of UD algorithm (see Appendix E for detailed discussions). (4)Their experiments take only hyper-parameter optimization into consideration and neglect other applications in the bilevel optimization instances.
To address all the aforementioned issues, we give in this paper a thorough analysis on the generalization behaviors of first-order (gradient-based) methods for general bilevel optimization problem. We employ the recent advances of algorithmic stability to investigate the generalization behaviors in different settings. Specifically, our main contributions can be summarized as follows:
• Firstly, we establish a fundamental connection between the generalization gap and different notions of algorithmic stability (argument stability and uniform stability) for any randomized bilevel optimization algorithm, in both expectation and high probability forms. Specifically, we show that the high probability form of the generalization gap bound can be improved from O(√n) to O(log n) compared with the result in Bao et al. (2021).
• Next, we present the stability bounds for gradient-based methods with either single-timescale or two-timescale update strategies under different standard settings. To the best of our knowledge, this work provides the first stability bounds for two-timescale (double loop) algorithms, which allow the accumulation of the sub-sampled gradients in the inner level. In detail, we consider the settings of strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC), and further extend our analysis to a particular nonconvex-strongly-convex (NC-SC) setting that widely arises in practice. Table 1 summarizes our main results.
• Thirdly, we provide the first generalization bounds for the case where both the outer and inner level parameters are subject to continuous (iterative) changes. Compared to the previous work (Bao et al., 2021), our work does not need the reinitialization step before each iteration in the inner level and hence our algorithm can carry over the last updated inner level parameters, which is more general and practical.
• Finally, we conduct empirical studies to corroborate our theories via meta-learning and hyperparameter optimization, which are two applications of bilevel optimization.
Due to space limitations, all the proofs and additional experiments are included in Appendix.
1.1 RELATED WORK
Research at the interface between generalization and the bilevel problem can be roughly classified into two categories. The first one includes all the research on bilevel optimization. In recent decades, extensive studies have been done on this topic, which suggests that bilevel optimization has a wide range of applications in machine learning such as hyper-parameter optimization (Franceschi et al., 2018; Lorraine & Duvenaud, 2018; Okuno et al., 2021), meta learning (Bertinetto et al., 2018; Rajeswaran et al., 2019; Soh et al., 2020) and reinforcement learning (Yang et al., 2018; Tschiatschek et al., 2019). Most of the existing work studies the problem from an optimization perspective. For example, Ghadimi & Wang (2018); Ji et al. (2021) provide convergence rate analyses based on the nonconvex-strongly-convex assumption for the two functions f(·, ·) and g(·, ·). (Grazzi et al., 2020) considers the iteration complexity for hypergradient computation. (Liu et al., 2020; Li et al., 2020) present an asymptotic analysis for the convex-strongly-convex setting. Perhaps the most related one to ours from the generalization standpoint (i.e., the expectation of population risk and empirical risk) is Bao et al. (2021), although there may exist some gap in its analysis of the UD algorithm. In this work, we employ a novel approach to examine the stability bounds of bilevel optimization problems. Firstly, our work analyzes the generalization behavior by observing how different settings can have an impact on the stability bounds directly. Secondly, our work adopts a stronger version of stability called argument stability, which can imply the previously used uniform stability if the function is sufficiently smooth. Furthermore, our work does not need to reinitialize the inner-level parameters and allows them to carry over their last updated values each time the inner level is updated. This indicates that y in the inner level is updated iteratively and depends on the current parameter x, which is more common and efficient in practice.
The second category includes all the work on stability analysis. There is a long list of research on stability and generalization (Bousquet & Elisseeff, 2002; Mukherjee et al., 2006; Shalev-Shwartz et al., 2010). Bousquet & Elisseeff (2002) first introduces the notion of uniform stability and establishes the first framework of stability analysis. Hardt et al. (2016) later extends the stability analysis to iterative algorithms based on stochastic gradient methods for the vanilla stochastic optimization. After that, there are subsequent studies on generalization analysis for various problems via algorithmic stability, such as minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021; Zhang et al., 2021) and pairwise learning (Yang et al., 2021; Lei et al., 2020; Xue et al., 2021; Huai et al., 2020). However, it is notable that due to the additional stochastic function in the constraint in the bilevel optimization, all the previous techniques and results cannot be applied to our problem. Although the generalization analysis of minmax optimization is somewhat similar to ours, it involves only one objective function f and a single level in algorithms for typical minmax optimization problems, while in the bilevel optimization algorithms there is an inner level and an outer level, which is considerably more challenging.
2 PRELIMINARIES
2.1 DEFINITIONS AND ASSUMPTIONS
In the following, we give some necessary definitions and assumptions that are widely used in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2021; Khanduri et al., 2021) and generalization analysis (Hardt et al., 2016; Lei et al., 2021).
Definition 1 (Joint Lipschitz Continuity). A function f(x,y) is jointly L-Lipschitz over Rd1 × Rd2 if, for all x, x′ ∈ Rd1 and y, y′ ∈ Rd2, |f(x,y) − f(x′,y′)| ≤ L √(∥x − x′∥_2^2 + ∥y − y′∥_2^2).
Definition 2 (Smoothness). A function f is l-smooth over a set S if for all u, w ∈ S, ∥∇f(u) − ∇f(w)∥ ≤ l ∥u − w∥.
Definition 3 (Strong Convexity). A function f is µ-strongly-convex over a set S if for all u, w ∈ S, f(u) + ⟨∇f(u), w − u⟩ + (µ/2) ∥w − u∥^2 ≤ f(w).
Assumption 1 (Inner-level Function Assumption). We assume the inner stochastic function g(x,y) in (1) satisfies the following: (i) g(x,y) is jointly Lg-Lipschitz for any x ∈ Rd1 and y ∈ Rd2; (ii) g(x,y) is continuously differentiable and lg-smooth for any (x,y) ∈ Rd1 × Rd2.
Assumption 2 (Outer-level Function Assumption). We assume the outer stochastic function f(x,y) in (1) satisfies the following: (iii) f(x,y) is jointly Lf-Lipschitz for any x ∈ Rd1 and y ∈ Rd2; (iv) f(x,y) is continuously differentiable and lf-smooth for any (x,y) ∈ Rd1 × Rd2.
2.2 PROBLEM FORMULATION
Given two distributions D1 and D2, in the (stochastic) optimization problem we aim to find the minimizer of Problem (1). However, since the distributions are often unknown, in practice we only have two finite-size datasets Dm1 = {ξi | i = 1, ..., m1} ∼ D1^m1 and Dm2 = {ζi | i = 1, ..., m2} ∼ D2^m2, where the ξi and ζi are i.i.d. sampled from D1 and D2, respectively. Based on these datasets, we design some (randomized) algorithm A with output A(Dm1, Dm2) = (x, y) ∈ Rd1 × Rd2. Our goal is to investigate the generalization behavior of this output. Note that although there are two stochastic functions in the bilevel optimization problem, we only care about the generalization of the outer-level one, since it is the one that we ultimately want to minimize.
Below we define the generalization gap to measure the generalization behavior. Given a distribution D1 and a finite dataset Dm1 ∼ D1^m1, the population risk function R(x,y,D1) of x, y on D1 is defined as R(x,y,D1) := E_{ξ∼D1}[f(x, y(x); ξ)], and its empirical risk function on Dm1 is Rs(x,y,Dm1) = (1/m1) Σ_{i=1}^{m1} f(x, y(x); ξi). Moreover, for a fixed hyperparameter x ∈ Rd1 and y(x) ∈ Rd2 (note that y(x) might depend on x), we define the difference between the population risk and the empirical risk over (x, y(x)) as the bilevel generalization gap of (x, y(x)): Es[R(x,y) − Rs(x,y)], where Es denotes the expectation over Dm1 ∼ D1^m1. When there is no ambiguity, we hereafter simplify the notation as follows: R(x,y,D1) = R(x,y) and Rs(x,y,Dm1) = Rs(x,y). Our goal is thus to analyze the bilevel generalization gap of the output of algorithm A(Dm1, Dm2) based on Dm1 and Dm2. Since the generalization error depends on the algorithm itself, we next introduce the algorithms considered in this paper.
Most of the existing algorithms adopt the following idea: first approximate y∗ on Dm2 for a given parameter x in the inner level and then seek the hyperparameter x∗(Dm1 , Dm2) with corresponding hypothesis y∗(x∗(Dm1 , Dm2), Dm2) by the below estimation:
x̂(Dm1 , Dm2) ≈ argminx Rs(x, ŷ(x, Dm2), Dm1),
where ŷ(x, Dm2) ≈ argminy Gs(x,y, Dm2), (2)
where Gs(x,y,Dm2) is the empirical risk of G(x,y) over Dm2, i.e., Gs(x,y,Dm2) = (1/m2) Σ_{i=1}^{m2} g(x, y(x); ζi). Most of the current gradient-based (first-order) algorithms for approximating (2) can be categorized into two classes: single-timescale methods and two-timescale methods. The single-timescale method performs the updates for y and x simultaneously via stochastic gradient descent (SGD), while the two-timescale method updates y multiple times before updating x (again via stochastic gradient descent). As there are numerous approaches in both classes (see the Related Work section for details), in this paper we analyze the generalization behaviors of the most classical and standard one in each class, i.e., single-timescale SGD (SSGD; Algorithm 1) and two-timescale SGD (TSGD; Algorithm 2). There is a long list of work (Chen et al., 2021; Ghadimi & Wang, 2018; Ji et al., 2021) based on either SSGD or TSGD.
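To make the two update strategies concrete, below is a minimal NumPy sketch of the SSGD and TSGD iterations from Algorithms 1 and 2 on a toy one-dimensional bilevel instance; the quadratic per-sample losses, data, and step sizes are illustrative assumptions and not part of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy per-sample losses (illustrative assumptions, not from the paper):
#   outer  f(x, y; xi)   = 0.5 * (x - y - xi)^2
#   inner  g(x, y; zeta) = 0.5 * (y - x - zeta)^2
def grad_f_x(x, y, xi):      # d f / d x
    return x - y - xi

def grad_g_y(x, y, zeta):    # d g / d y
    return y - x - zeta

def ssgd(xi_data, zeta_data, K=200, ax=0.05, ay=0.05):
    """Algorithm 1: update y and x simultaneously, one sample of each per step."""
    x, y = 0.0, 0.0
    for _ in range(K):
        i = rng.integers(len(zeta_data))
        j = rng.integers(len(xi_data))
        y_new = y - ay * grad_g_y(x, y, zeta_data[i])
        x = x - ax * grad_f_x(x, y, xi_data[j])   # uses the current y, as in Algorithm 1
        y = y_new
    return x, y

def tsgd(xi_data, zeta_data, K=200, T=5, ax=0.05, ay=0.05):
    """Algorithm 2: T inner SGD steps on y (warm-started) before each x update."""
    x, y = 0.0, 0.0
    for _ in range(K):
        for _ in range(T):                        # inner loop; y carries over across k
            i = rng.integers(len(zeta_data))
            y = y - ay * grad_g_y(x, y, zeta_data[i])
        j = rng.integers(len(xi_data))
        x = x - ax * grad_f_x(x, y, xi_data[j])
    return x, y

m1, m2 = 50, 50
xi_data = 1.0 + 0.1 * rng.standard_normal(m1)
zeta_data = 0.5 + 0.1 * rng.standard_normal(m2)
print("SSGD:", ssgd(xi_data, zeta_data))
print("TSGD:", tsgd(xi_data, zeta_data))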
3 GENERALIZATION AND STABILITY FOR BILEVEL OPTIMIZATION
Algorithmic stability is one of the classical approaches to analyzing the generalization bound of algorithms. Roughly speaking, the algorithmic stability of a (randomized) algorithm A measures how the output of algorithm A changes if we change one data sample in the input dataset. While there are various notions of stability, most of the existing work on analyzing the stability of stochastic optimization, pairwise learning and minimax optimization focuses on uniform stability (Bousquet & Elisseeff, 2002) and argument stability (Liu et al., 2017; Lei & Ying, 2020). Thus, we also adopt these two notions of stability for the bilevel optimization problem. Briefly speaking, uniform stability focuses on the resulting change in the population risk function, while argument stability considers the resulting change in the arguments, i.e., the output of the algorithm. Definition 4 (Algorithmic Stability). Let A : D1^m1 × D2^m2 → Rd1 × Rd2 be a randomized algorithm.
Algorithm 1 Single-timescale SGD (SSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets Dm1 and Dm2
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   Uniformly sample i ∈ [m2], j ∈ [m1]
5:   y_{k+1} = y_k − αy ∇y g(x_k, y_k(x_k); ζ_i)
6:   x_{k+1} = x_k − αx ∇f(x_k, y_k(x_k); ξ_j)
7: end for
8: return xK and yK

Algorithm 2 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     Uniformly sample i ∈ [m2]
7:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); ζ_i)
8:   end for
9:   Uniformly sample j ∈ [m1]
10:  x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); ξ_j)
11: end for
12: return xK, y_K^T
(a) A is β-uniformly-stable if for all datasets Dm1, D′m1 ∼ D1^m1 and Dm2 ∼ D2^m2 such that Dm1 and D′m1 differ in at most one sample, we have the following for any ξ ∼ D1:
E_A[ |f(A(Dm1, Dm2), ξ) − f(A(D′m1, Dm2), ξ)| ] ≤ β.
A is β-uniformly-stable with probability at least 1 − δ if we have the following for any ξ ∼ D1 with probability at least 1 − δ:
|f(A(Dm1, Dm2), ξ) − f(A(D′m1, Dm2), ξ)| ≤ β.
(b) A is β-argument-stable in expectation if for all datasets Dm1, D′m1 ∼ D1^m1 and Dm2 ∼ D2^m2 such that Dm1 and D′m1 differ in at most one sample, we have:
E_A[ ∥A(Dm1, Dm2) − A(D′m1, Dm2)∥_2 ] ≤ β.
Note that the definition of uniform stability in expectation is the same as the definition in (Bao et al., 2021). Thus, our other definitions can be considered as extensions of the previous stability notions for bilevel optimization. In the following, we present Theorem 1 as our first result, which shows a crucial relationship between the generalization gap and algorithmic stability for an algorithm A.
Theorem 1. Let A : D1^m1 × D2^m2 → Rd1 × Rd2 be a randomized BO algorithm.
(a) If A is β-uniformly-stable in expectation, then the following holds for Dm1 ∼ D1^m1, Dm2 ∼ D2^m2:
E_{A,Dm1}[ R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2)) ] ≤ β.
(b) If A is β-argument-stable in expectation and Assumption 2 holds, then the following holds for Dm1 ∼ D1^m1, Dm2 ∼ D2^m2:
E_{A,Dm1}[ R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2)) ] ≤ Lf β.
(c) Assume that |f(x,y; ξ)| ≤ M for some M ≥ 0. If A is β-uniformly-stable almost surely, then for Dm1 ∼ D1^m1, Dm2 ∼ D2^m2, the following holds with probability 1 − δ:
|R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2))| ≤ 2β + e( (4M/√m1) √(log(e/δ)) + 12√2 · β ⌈log_2 m1⌉ √(log(e/δ)) ),
where e is the base of the natural logarithm.
Remark 1. The above theorem suggests that the generalization gap can be controlled by several notions of algorithmic stability. Part (a) and Part (b) show that the expectation of the generalization gap can be bounded by uniform stability and by argument stability together with the Lipschitz constant, respectively; Part (c) indicates that the generalization gap of the algorithm is no more than O(β log(m1) + 1/√m1) with probability 1 − δ. Compared with the existing work (Bao et al., 2021), Theorem 1 additionally considers argument stability, which is a stronger notion of stability than uniform stability (since uniform stability can be deduced from argument stability under the condition that the function is sufficiently smooth). Moreover, we use McDiarmid's inequality and the equivalence of tails and moments for random variables with a mixture of sub-gaussian and sub-exponential tails (Lemma 1 in Bousquet et al. (2020)), which provides a significantly improved high probability bound in Part (c) (i.e., improving from O(β√m1) in Bao et al. (2021) to O(β log m1)).
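To give a concrete sense of the argument stability in Definition 4(b) that Theorem 1(b) relies on, the following sketch estimates a stability proxy by running the same SGD-style bilevel routine on two neighbouring outer datasets that differ in a single sample and measuring the distance between the outputs; the toy quadratic losses and the training routine are assumptions made purely for illustration, not the paper's algorithms verbatim.

import numpy as np

rng = np.random.default_rng(1)

def train(xi_data, zeta_data, K=300, ax=0.02, ay=0.02, seed=0):
    """A hypothetical SSGD-style run on toy quadratic losses (cf. Section 2.2)."""
    r = np.random.default_rng(seed)              # shared seed: same sample path for A(·)
    x, y = 0.0, 0.0
    for _ in range(K):
        i = r.integers(len(zeta_data))
        j = r.integers(len(xi_data))
        y_new = y - ay * (y - x - zeta_data[i])  # -grad_y g step
        x = x - ax * (x - y - xi_data[j])        # -grad_x f step
        y = y_new
    return np.array([x, y])

m1, m2 = 100, 100
zeta = 0.5 + 0.1 * rng.standard_normal(m2)
xi = 1.0 + 0.1 * rng.standard_normal(m1)
xi_prime = xi.copy()
xi_prime[0] = 1.0 + 0.1 * rng.standard_normal()  # neighbouring dataset: one sample replaced

out = train(xi, zeta)
out_prime = train(xi_prime, zeta)
# Monte-Carlo proxy for the argument-stability parameter beta in Definition 4(b)
print("||A(D) - A(D')||_2 =", np.linalg.norm(out - out_prime))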
4 STABILITY ANALYSIS FOR BILEVEL OPTIMIZATION ALGORITHMS
Motivated by Theorem 1, we can see that to analyze the generalization behaviors for any algorithm, it is sufficient to analyze its stability. As mentioned in the previous Section 2.2, we will consider the stability of SSGD and TSGD. For simplicity we let SC-SC denote the case where f and g both are strongly convex functions. C-C, NC-NC, and NC-SC are also denoted in a similar manner with ”C” representing convex function and ”NC” representing nonconvex function.
4.1 STABILITY BOUNDS FOR SINGLE-TIMESCALE SGD
As we can see from Algorithm 1, SSGD updates y and x simultaneously. In the following we develop stability bounds for this algorithm in different settings. Theorem 2. Suppose that Assumptions 1 and 2 hold and Algorithm A is SSGD with K iterations:
(a) Assume that Problem (1) is SC-SC with strong convexity parameters µf and µg. Let αx = αy (see Lemma 9 for details) be the step sizes and denote l = max{lf, lg}. Then, A is β-argument-stable in expectation, where
β ≤ O( (Lf^2 + Lg^2)^{1/2} · ( m1 ( µf + µg − (αx l)^2/2 + 0.25 ) )^{-1} ).
(b) Assume that Problem (1) is C-C. Let αx, αy be the step sizes. Then, A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √((αx Lf)^2 + (αy Lg)^2) · (2 + 2 max{(αx lf)^2, (αy lg)^2})^{K/2} ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{αx, αy} ≤ c/k for some constant c ≥ 0 and let l = max{lf, lg}. Then, A is β-argument-stable in expectation, where
β ≤ O( (m1 c l)^{-1} · (2c Lf √(lf^2 + lg^2))^{1/(cl+1)} · K^{cl/(cl+1)} ),
where lf, lg and Lf, Lg are the smoothness and Lipschitz constants of f and g, respectively.
Remark 2. Note that the above stability bounds are independent of the specific form of the objective function f(·, ·) and of the exact form of the sample distribution D1; they rely instead on the properties of the loss functions and the sample size m1, and the stability bounds in the C-C and NC-NC cases additionally depend on the number of iterations. Specifically, Part (a) establishes a stability bound of O(1/m1) in the SC-SC setting, and Part (b) considers the C-C case with a stability bound O(κ1^{K/2}/m1) depending on the number of iterations and the data size, where κ1 is a constant. The NC-NC case is discussed in Part (c), which provides a stability bound of O(K^{cl/(cl+1)}/m1), where c is a constant controlling the step size and l is the larger of the smoothness constants lf and lg. The conclusions here match the existing results for minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021).
4.2 STABILITY BOUNDS FOR TWO-TIMESCALE SGD
Compared with the above SSGD, two-timescale SGD (TSGD; Algorithm 2) always achieves more accurate approximate solutions by updating y multiple times before updating x. In this section, we extend our analysis from SSGD to TSGD. In particular, compared with the results in Bao et al. (2021), we provide stability bounds in Theorem 3 for the case where the inner level parameter (y) is updated iteratively (i.e., carried over consistently across outer iterations). We further explore in Theorem 4 a particular NC-SC setting, which commonly appears in bilevel optimization applications such as meta learning and hyperparameter optimization.
Theorem 3. Suppose that Assumptions 1 and 2 hold and |g(·, ·)| ≤ 1. Let A be the TSGD algorithm with K outer iterations and T inner iterations. Then we have
(a) Assume that Problem (1) is SC-SC. Let l = max{lf, (1 + (αy lg)^2) / ((1 − αy lg) αy)} and let α = αx = αy ≤ min{1/lg, 1/(µf + µg)} be the step sizes. Then, A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √( Lf^2 αx^2 + (2T / (αy (2 − αy lg)))^2 ) · (1 + α l)^K ).
(b) Assume that Problem (1) is C-C. Let αl = max{αx lf, (1 + (αy lg)^2) / (1 − αy lg)} and let αx, αy ≤ 1/lg be the step sizes. Then, A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √( Lf^2 αx^2 + (2T / (αy (2 − αy lg)))^2 ) · (1 + αl)^K ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{αx, αy} ≤ c/k for some constant c ≥ 0 and let l = max{lf, lg}. Then, A is β-argument-stable in expectation, where
β ≤ O( (m1 T c l)^{-1} · (2c Lf √(lf^2 + T^2 lg^2))^{1/(Tcl+1)} · K^{Tcl/(Tcl+1)} ).
Remark 3. Compared with the previous results for SSGD, the stability bounds of TSGD depend on the number of iterations in the outer level loop, the number of iterations in the inner level loop, and the data size in the outer level loop. If the step sizes are sufficiently small, we can see that the bounds in Theorem 3 are asymptotically the same as the bounds of SSGD in Theorem 2. Thus, Theorem 3 can be considered as a generalization of the previous one. The dependence on T also reveals our novelty compared with the existing work of stability analysis for other problems, such as simple SGD and minmax problems. To the best of our knowledge, this work provides the first stability bounds for the two-timescale (double loop) algorithms, which allows the accumulation of the sub-sampled gradients in the inner level. Remark 4. Comparing our results with the ones in (Bao et al., 2021), we have the following observations. 1) They only established the uniform stability bound for the Unrolled Differentiation algorithm 3, where the algorithm is reinitialized at each time entering the inner level loop, indicating that it takes into account the changes to only one parameter in the outer level loop, while our algorithm considers the update for both parameters. 2) Its proof needs to assume that the update of y in the inner level after the reinitialization will not be affected by the value specified for x. However, this assumption is quite uncommon and is probably the reason that they do not need to make any assumption on the inner level objective function (see Appendix E in details). In contrast, our work allows the inner level parameters to be updated consistently (i.e., carrying over the value in the last update), instead of being reinitialized at each time entering the inner level loop. Specifically, we allow yTk to be employed at the beginning of the (k+1)-th outer level iteration, rather than y0. This enables us to obtain different stability bounds for different inner level objective functions from a novel perspective.
In the following, we extend our analysis to a particular NC-SC setting that is frequently encountered in real-world applications and optimization analysis.
Theorem 4. Suppose that Assumptions 1 and 2 hold, 0 ≤ f(·, ·) ≤ 1, and Problem (1) is NC-SC. Let A be the TSGD algorithm with K outer iterations and T inner iterations, with max{αx, αy} ≤ c/k for a constant c ≥ 0. Denote l = max{lf, lg}. Then, A is β-uniformly-stable in expectation, where
β ≤ O( (2c Lf √(lf^2 + lg^2 T^2))^{1/(c(Tl+l−µg)+1)} · K^{c(Tl+l−µg)/(c(Tl+l−µg)+1)} · (Tl + l − µg + 2/c) / ( m1 (Tl + l − µg) ) ).
Remark 5. We now sketch the technical differences between this analysis and our previous one. Here we bound the vector (δx,k, δy,k)^T = (∥xk − x′k∥, ∥yk − y′k∥)^T, while in the previous analysis we employed δk = √(∥xk − x′k∥_2^2 + ∥yk − y′k∥_2^2), where (xk, yk) and (x′k, y′k) are the outputs of TSGD after k iterations on Dm1 and D′m1, respectively, with Dm1 and D′m1 differing in one sample. In the NC-SC setting, we show that (δx,k+1, δy,k+1)^T ≤ ((1 + αx l) δx,k, (1 + αx T l) δy,k)^T (where ≤ denotes the entry-wise inequality), which means this term can be controlled. Then we take its expectation to derive our uniform stability bound. To obtain the generalization gap over continuously changing parameters, it is imperative to take into account the growth of (δx,k, δy,k) instead of only δx,k as in (Bao et al., 2021). Appendix C.3 provides more details.
Thus, based on our previous results, we now provide the first generalization bounds in the NC-NC setting for both SSGD and TSGD.
Corollary 5. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Denote l = max{lf, lg} and let max{αx, αy} ≤ c/k for a constant c ≥ 0. Then, the generalization gap of SSGD (Algorithm 1) with K iterations is bounded by O(K^{cl/(cl+1)}/m1).
Corollary 6. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Let l = max{lf, lg} with max{αx, αy} ≤ c/k. Then, the generalization gap of TSGD (Algorithm 2) with K outer iterations and T inner iterations is bounded by O(T^{1/(Tcl+1)} K^{1 − 1/(Tcl+1)} / m1).
Remark 6. By Theorems 1, 2 and 3, we can derive the above corollaries on the generalization gap from the stability bounds. Corollary 5 and Corollary 6 show that an extremely large number of iterations (K for SSGD, and K, T for TSGD) will drastically reduce the stability of these algorithms and increase the generalization gap, thereby increasing the risk of overfitting. We also verify this in the following experiments.
5 EXPERIMENTS
In this section, we empirically validate our theoretical results on real-world datasets. Two experiments, on meta-learning and hyperparameter optimization, are conducted via TSGD (Algorithm 2); note that when T = 1, TSGD reduces to SSGD. Due to space limitations, we present only the meta-learning experiment here, leaving the hyperparameter optimization experiment and other details to Appendix D.
5.1 META LEARNING
Consider the few-shot meta-learning problem with M tasks {Ti, i = 1, ..., M} sampled from a distribution PT. We aim to learn a model that can rapidly adapt to different tasks. Firstly, the embedding model ϕ is shared by all tasks to learn embedded features. Secondly, the task-specific parameter wi adapts the shared embedding to its own sub-problem. Thus, the overall problem of meta-learning can be formulated as follows:
min_ϕ  L_D(ϕ, w̄∗) = E_{ξ∈D_i^te, Ti}[ L(ϕ, w_i∗; ξ) ],   (3a)
s.t.  w̄∗ = argmin_{w̄} [ L_{D^tr}(ϕ, w̄) = E_{Ti}[ L_{D_i^tr}(ϕ, wi) ] ],   (3b)
where D_i^tr and D_i^te are the training and testing datasets for task Ti. Each wi is computed from one or more gradient descent updates from w̄ on the corresponding task (rapid adaptation), i.e., wi = w̄ − α ∇w̄ L_{D_i^tr}(ϕ, wi). In the inner level, the base learner optimizes the series of wi for each task (Equation 3b). In the outer level, the meta-learner optimizes the embedding model ϕ using the minimizers w_i∗ learned from the inner level and computes the loss on the testing dataset (Equation 3a).
Settings and Implementation We evaluate the behavior of the 5-way-1-shot task on the Omniglot dataset (Lake et al., 2015), i.e., the goal is to classify 5 unseen classes from only 1 labeled sample each. The dataset contains 1623 different handwritten characters from 50 different alphabets. Each image is in greyscale with size 28 × 28. We follow similar settings to Ji et al. (2021). A five-layer fully-connected network is constructed, where the task-specific parameter wi corresponds to the last layer of the network and the shared embedding model ϕ corresponds to all preceding layers. Thus, we train the two sets of layers separately in the outer and inner levels of optimization. We build our model and set up our training using the software library learn2learn (Arnold et al., 2020). We follow the official train-validation-test partition and train ϕ, wi using the training set. The size of each layer in the network is 784 → 256 → 128 → 64 → 64 → 5. We set the number of tasks for the training and testing sets to 2000 and the batch size of tasks to 32. The learning rates of ϕ and wi are 0.002 and 0.01, respectively. Results are averaged over 5 trial runs with different random seeds.
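As a rough sketch of how the bilevel training loop above can be organized (the actual experiments use the learn2learn library), the following PyTorch code treats the last layer of a five-layer MLP as the task-specific parameter w_i and the preceding layers as the shared embedding ϕ, with inner/outer learning rates 0.01 and 0.002 as in the settings; the task-sampling helper is a hypothetical placeholder producing random 5-way-1-shot batches rather than real Omniglot tasks.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Five-layer MLP: all layers before the last form phi, the last layer is the task head w.
phi = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
head = nn.Linear(64, 5)                       # each w_i is adapted per task from this head
outer_opt = torch.optim.SGD(phi.parameters(), lr=0.002)
inner_lr, T = 0.01, 5                          # T inner adaptation steps per task

def sample_task_batch(n_tasks=32):
    """Hypothetical placeholder: random tensors shaped like 5-way-1-shot tasks."""
    for _ in range(n_tasks):
        x_tr, y_tr = torch.randn(5, 784), torch.arange(5)
        x_te, y_te = torch.randn(5, 784), torch.arange(5)
        yield x_tr, y_tr, x_te, y_te

def adapt(head, feats, y, steps, lr):
    """Inner level: a few gradient steps on the task head (rapid adaptation)."""
    w, b = head.weight.clone(), head.bias.clone()
    for _ in range(steps):
        loss = F.cross_entropy(feats @ w.t() + b, y)
        gw, gb = torch.autograd.grad(loss, (w, b))
        w, b = w - lr * gw, b - lr * gb
    return w, b

for x_tr, y_tr, x_te, y_te in sample_task_batch():
    feats_tr, feats_te = phi(x_tr), phi(x_te)
    w, b = adapt(head, feats_tr.detach(), y_tr, T, inner_lr)   # inner loop on w_i
    outer_loss = F.cross_entropy(feats_te @ w.t() + b, y_te)   # outer loss on the test split
    outer_opt.zero_grad()
    outer_loss.backward()                                       # first-order update of phi
    outer_opt.step()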
Results Evaluation Figure 1 presents the learning curves on the training set and testing set, and the generalization gap, for different values of the inner iteration number T and the outer iteration number K. The generalization gap is estimated by the difference between training and testing loss. On one hand, the model easily overfits the testing set as K increases (Figure 1b), and the effect of T is very limited. On the other hand, with an appropriate value of K, a smaller T results in underfitting on the testing loss (T = 1 in Figure 1c causes the highest generalization gap due to the underfitted training process). The trend of the generalization gap in terms of K and T indicates that large iteration numbers increase the risk of overfitting, which matches our analysis in Theorem 4 that the stability of TSGD (Algorithm 2) decreases drastically.
6 CONCLUSION
We give a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization framework. In particular, we establish a quantitative connection between generalization and algorithmic stability and provide the first generalization bounds for continuous updates of both the inner and outer parameters in multiple settings. Our experiments suggest that inappropriate numbers of iterations can easily cause underfitting or overfitting. The trend of the generalization gap also validates our theoretical results.
In the previous sections we only discussed first-order methods, while there exist a number of second-order-estimating and momentum-based approaches for solving the bilevel optimization problem. Dealing with the approximation of the hypergradient in the generalization analysis is a direction for future work.
A COMPARISON BETWEEN UD AND TSGD
Algorithm 3 Unrolled differentiation (UD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_0
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); Dm2)
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); Dm1)
9: end for
10: return xK, y_K^T

Algorithm 4 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets Dm1, Dm2
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); Dm2)
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); Dm1)
9: end for
10: return xK, y_K^T
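To make the difference between Algorithm 3 (UD) and Algorithm 4 (TSGD) explicit in code, the sketch below isolates the inner-loop initialization that distinguishes them: UD resets y to the fixed initial point y0 before every outer iteration, while TSGD warm-starts from the y produced by the previous outer iteration. The gradient helpers are hypothetical placeholders standing in for full-batch gradients on Dm2 and Dm1, and the sketch deliberately ignores the second-order (unrolling) part of UD.

# Schematic comparison (gradient helpers are hypothetical placeholders).
def run(reinitialize, x0, y0, K, T, ax, ay, grad_g_y, grad_f_x):
    x, y = x0, y0
    for k in range(K):
        if reinitialize:          # Algorithm 3 (UD): y restarts from y0 every outer step
            y = y0
        # otherwise, Algorithm 4 (TSGD): y carries over its last updated value
        for t in range(T):
            y = y - ay * grad_g_y(x, y)      # full-batch inner gradient on Dm2
        x = x - ax * grad_f_x(x, y)          # full-batch outer gradient on Dm1
    return x, y

# ud_out   = run(reinitialize=True,  ...)   # UD-style inner restarts
# tsgd_out = run(reinitialize=False, ...)   # TSGD-style warm starts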
B PROOF OF PRELIMINARIES
B.1 THE PROOF OF THEOREM 1
Proof of Part (a). Since ξ and ξi are drawn from the same distribution, we know EA[R(A(Dm1 , Dm2),D1)−Rs(A(Dm1 , Dm2), Dm1)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(Dm1 , Dm2), ξ)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(ξ, ξ2, .., ξi−1, ξi+1, ...ξm1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(D ′ m1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] ≤ β, where D′m1 and Dm1 differ in at most one sample ξi.
Proof of Part (b). Similarly, we have EA[f(A(Dm1 , Dm2),D1)− f(A(Dm1 , Dm2), Dm1)] = EA,ξiDm1 ,ξ∼D1 [f(A(Dm1 , Dm2), ξ)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(ξ, ξ2, .., ξi−1, ξi+1, ...ξm1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(D ′ m1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)]
≤ EA,ξi∈Dm1 ,ξ∼D1 [Lf∥A(D ′ m1 , Dm2)−A(Dm1 , Dm2)∥ ≤ Lfβ.
To prove high probability bounds, we need the following lemma on the concentration behavior on the summation of weakly dependent random variables. Lemma 7 (Bousquet et al. 2020). Let Z = (Z1, . . . , Zn) be a vector of independent random variables with each taking values in Z , and g1, . . . , gn be some functions gi : Zn → R such that the following holds for any i ∈ [n] :
• |E [gi(Z) | Zi]| ≤M a.s., • E [ gi(Z) | Z[n]\{i} ] = 0 a.s.,
• gi has a bounded difference β with respect to all variables except for the i-th variable.
Then, for any p ≥ 2, ∥∥∥∥∥ n∑
i=1
gi(Z) ∥∥∥∥∥ p ≤ 12 √ 2pnβ ⌈log2 n⌉+ 4M √ pn,
where the Lp-norm of a random variable Z is denoted by ∥Z∥p := (E[|Z|p])1/p, p ≥ 1.
Next, we state the following well-known relationship between tail bounds and moment bounds.
Lemma 8 (Bousquet et al. 2020; Vershynin 2018). Let a, b ∈ R+. Let Z be a random variable with ∥Z∥p ≤ √ pa+ pb and p ≥ 2. Then, for any δ ∈ (0, 1), we have, with probability at least 1− δ
|Z| ≤ e ( a √ log( e
δ ) + b log(
e δ ) ) .
Proof of Part (c). In order to make use of Lemma 7 to obtain the generalization bounds, we will introduce:
hi = Eξ′i∼D1 [Eξi∼D1 [f(A(D i m1 , Dm2 ; ξ))]− f(A(D i m1 , Dm2 ; ξi)],
where Dim1 = {ξ1, ξ2, ..., ξi−1, ξ ′ i, ξi+1, ..., ξm1}, and ξ′i obeys identical distribution of ξi.
Hence, we have:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
= 1
m1 ∣∣∣ m1∑ i=1 (Eξ∼D1f(A(Dm1 , Dm2); ξ)− f(A(Dm1 , Dm2); ξi)) ∣∣∣
≤ 1 m1 ∣∣∣ m1∑ i=1 ( Eξ∼D1f(A(Dm1 , Dm2); ξ)− Eξ∼D1,ξ′i∼D1f(A(D i m1 , Dm2); ξ) ) ∣∣∣ +
∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2); ξi) ∣∣∣∣∣ + 1
m1 ∣∣∣∣∣ m1∑ i=1 ( Eξ′i∼D1f(A(D i m1 , Dm2); ξi)− f(A(Dm1 , Dm2); ξi) )∣∣∣∣∣ . It then follows from the definition of uniform stability that
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
≤2β + ∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2 ; ξi)) ∣∣∣∣∣ =2β + 1
m1 ∣∣∣∣∣ m1∑ i=1 hi ∣∣∣∣∣ . Notice that all conditions of 7 hold. Thus, the following outcome can be derived for any p ≥ 2:∥∥∥∥∥ m1∑ i=1 hi(ξ) ∥∥∥∥∥ p ≤ 12 √ 2pm1β ⌈log2 m1⌉+ 4M √ pm1.
Combining Lemma 7 and Lemma 8 with hi defined above, we have the following inequality with probability 1− δ: ∣∣∣∣∣ m1∑ i=1 hi(ξ) ∣∣∣∣∣ ≤ e( 4M√m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The deviation bound now follows immediately:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)| ≤ 2β + e ( 4M √ m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The proof is completed.
C MAIN PROOF
C.1 APPROXIMATE EXPANSIVITY OF UPDATE RULES
With step size αx and αy , the update rules for single-timescale can be presented:
Gs ([ x y ]) := [ x− αx∇f(x,y) y − αy∇yg(x,y) ] .
Definition 5 (expansivity). An update rule is η-expansive if for every x, x′ ∈ Rd1 , y, y′ ∈ Rd2 : ∥G(x,y)−G (x′,y′)∥2 ≤ η √ ∥x− x′∥22 + ∥y − y′∥ 2 2.
Lemma 9. Suppose that Assumptions 1 and 2 hold for Problem (1). Then:
1. If f and g are non-convex functions, then Gs is (1+max{lfαx, lgαy})-expansive with step size αx, αy .
2. If f and g are convex functions, then Gs is ( √
2 + 2max{(lfαx)2, (lgαy)2})-expansive with step size αx, αy . 3. If f and g are strongly-convex with µf and µg respectively, then Gs is√ 2 (1− 2αx (µf + µg) + αx2l2)-expansive with step size:
(uf + µg)− √ (uf + µg) 2 − 0.5l2
l2 ≤ αx = αy
≤ min 1µf + µg , (uf + µg) + √ (uf + µg) 2 − 0.5l2 l2 . Proof. In Case 1 with the NC-NC objectives and the smoothness of objectives on Assumptions 1 and 2, we have∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y′))y − y′ + αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥+ ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y′))αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ (1 + max{lfαx, lgαy}) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ . In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y)) αy (∇yg(x,y)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y))αy (∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (4)
and∥∥∥∥Gs([ x′y ]) −Gs ([ x′ y′ ])∥∥∥∥2 = ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 − 2 [ x′ − x′y − y′ ]T [ αx (∇f(x′,y′)−∇f (x′,y)) αy (∇yg(x′,y′)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x′,y)−∇f (x′,y′))αy (∇yg(x′,y)−∇yg (x′,y′)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 + ∥y − y′∥2 . (5)
Combining the above equations 6, 7 and inequality ( ∑k
i=1 ak) 2 ≤ k ∑k i=1 a 2 k, we can derive the
expansive of update rule Gs under convexity condition:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ (2 + 2max{(lfαx)2, (lgαy)2})∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥
2 + ∥y∥2) will be convex. With the above conclusions, we can derive the following:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y)) (∇yg(x,y)−∇yg (x′,y))
] + αx 2 ∥∥∥∥[ (∇f(x,y)−∇f (x′,y))(∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥[ (∇f̃(x,y)−∇f̃ (x′,y))(∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ] ≤ ( 1− 2αx (µf + µg) + αx2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , lg}, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥[ ∇f(x,y)−∇f (x′,y)∇yg(x,y)−∇yg (x′,y) ]∥∥∥∥2
= ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]
≥ ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ 2 (1− 2αx (µf + µg) + αx2l2) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
C.2 SINGLE TIMESCALE
We first introduce the following lemma before providing the proof of the Theorem.
Lemma 10 (Hardt et al. (2016)). Consider two sequences of updates G1s, ..., GKs and
(G1s) ′, ..., (GKs ) ′ with initial points x0 = x′0, y0 = y ′ 0. Define δk = √ ∥xk − x′k∥ 2 + ∥yk − y′k∥
2. Then, we have:
δk+1 ≤ ηδk if Gks = (G k s) ′ is η-expansive min(η, 1)δk + 2σ if sup ∥∥∥∥[ xy ] −G ([ x y ])∥∥∥∥ ≤ σ Gks is η expansive
Proof. The first part of the inequality is obvious from the definition of expansivity and the assumption of Gks = (G k s) ′. For the second bound, note that:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) − [ xk yk ] + [ x′k y′k ] −G′s ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ xk − x′kyk − y′k ]∥∥∥∥
≤ δk + ∥∥∥∥Gs([ xkyk ]) − [ xk yk ]∥∥∥∥+ ∥∥∥∥G′s([ x′ky′k ]) − [ x′k y′k ]∥∥∥∥ ≤ δk + 2σ.
Also, δk+1 can be further expressed as:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ]) +Gs ([ x′k y′k ]) −G′s ([ x′k y′k
])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −G′s ([ x′k yk′
])∥∥∥∥ ≤ ηδk + 2σ.
Combining the above completes the proof of the Lemma 10.
Now, we are ready to prove Theorem 2:
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing only in one sample. Consider the updates G1s, ..., G K s and (G 1 s) ′, ..., (GKs ) ′. We can observe that the example chosen by the algorithm is the same in Dm1 , D ′ m1 at step k with probability 1−1/m1 and different with proba-
bility 1/m1. In the former case, we have identical update rules, while √ 1− 2αx (µf + µg) + α2xl2-
expansive can be employed in the latter through lemma 10. E [δk+1] ≤ ( 1− 1
m1
)( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 1 m1 E [δk] + 1 m1 2 √ (αxLf )2 + (αxLg)2
≤ ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 2 m1 √ (αxLf ) 2 + (αxLg) 2
≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
k∑ i=0 ( 2 ( 1− 2αx (µy + µg) + α2xl2 ))i/2 ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))i/2 (1) ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 1− 2αx (µf + µg) + α2xl2 + 0.5 )i = √ (αxLf )2 + (αxLg)2
m1 ( αx (µf + µg)− α 2 xl 2 2 + 0.25 )
=
√ L2f + L 2 g
m1 (µf + µg − (αxl)2/2 + 0.25) .
Here (1) comes from the mean equality √ ab ≤ (a + b)/2 for any a, b ≥ 0 and the assumption of (uf+µg)− √ (uf+µg) 2−0.5l2
l2 ≤ αx ≤ (uf+µg)+
√ (uf+µg)
2−0.5l2 l2 , which finishes the proof.
Proof of Part(b). The proof of Part(b) is analogous to the above, thus we use the same notations for this part.
E [δk+1] ≤ ( 1− 1
m1
)( 2 + 2max { l2fα 2 x, l 2 yα 2 y })1/2 E [δk] + 1 m1 E [δk] + 2 m1 √ L2fα 2 x + L 2 gα 2 y
= ( 2 + 2max { l2fα 2 x, l 2 gα 2 y })1/2 E [δk] + 2 √ L2fα 2 x + L 2 gα 2 y
m1 E [δk] ≤ 2 √ L2fα 2 x + L 2 gα 2 y
m1 ·
( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2 − 1√
2 + 2max { l2fα 2 x, l 2 gα 2 y } − 1
E [δk] ≤ O √ L2fα 2 x + L 2 gα 2 y ( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2
m1
.
To prove stability in the NC-NC case, we introduce the following lemma:
Lemma 11 (Hardt et al. (2016)). Assume that f(x,y; ξ) is Lf -Lipschitz continuous and 0 ≤ f(x,y; ξ) ≤ 1. Let Dm1 and D′m1 be two datasets differing in only one sample. Denote (xK ,yK) and (x′K ,y ′ K) as the output of K steps of SSGD (single-timescale algorithm) on Dm1
and D′m1 , respectively. Then, the following holds for every k ∈ {0, 1, ...,K}, where δk =√ ∥xk − x′k∥ 2 + ∥yk − y′k∥ 2:
E [|f (xk,yk; ξ)− f (x′k,y′k; ξ)|] ≤ k0 m1 + LfE [δk | δk0 = 0] .
Proof of Part(c). Applying Lemma 11, we get ready to prove the NC-NC case. Analogous to the previous case, we have:
E [δk+1] ≤ ( 1− 1
m1
)( 1 + cl
k
) E [δk] + 1
m
( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
k
= ( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
m1k .
The following can be derived:
E [δK | δk0 = 0] ≤ K∑
k=k0+1
T∏ t=k+1 ( 1 + cl t ) 2c√l2f + l2g m1k
≤ K∑
k=k0+1
T∏ t=k+1 { exp ( cl t )} 2c√l2f + l2g m1k
≤ K∑
k=k0+1
exp
( K∑
t=k+1
cl
t
) 2c √ l2f + l 2 g
m1k
≤ k∑
k=k0+1
exp(cl · log(K/k)) 2c √ l2f + l 2 g
m1k ≤ 2c √ l2f + l 2 g
m1
K∑ k=k0+1 k−cl−1
≤ 2 √ l2f + l 2 g
m1l
( K
k0
)cl .
Hence, Lemma 11 indicates:
E [|f(x, y)− f (x′, y′)|] ≤ k0 m1
+ 2Lf
√ l2f + l 2 g
m1l
( K
k0
)cl .
The right hand side is approximately minimized when k0 = ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1 .
Therefore, we have
β ≤ O ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1
m1cl for argument stability.
C.3 TWO-TIMESCALE SGD (TSGD)
C.3.1 STANDARD SETTINGS
With step size αx and αy , the update rule for two-timescale can be presented as:
GT ([ xk yk ]) := [ xk − αx∇f(xk,yTk ) yk − αy ∑T t=1∇yg(xk,ytk) ] .
Analogous to the single-timescale case, we first provide the expansivity of the update rules.
Lemma 12. Suppose that Assumptions 1 and 2 hold for Problem (1). Let αl = max{αxlf , 1+(αylg) 2
1−αylg } for simplicity sake and assume αylg ≤ 1. Then:
1. If f and g are non-convex functions, GT is (1 + αlT )-expansive.
2. If f and g are convex functions, GT is (1 + αl)-expansive with step size αx, αy .
3. If f and g are strongly-convex with µf and µg respectively, GT is 1 + αl-expansive with step size:
αx = αy ≤ 1
µf + µg .
Proof. In Case 1 with the NC-NC objectives by the triangle inequality, we have:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥+∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ The first item can be derived from:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y))y − y + αy∑Ty=1 (∇yg(x,yt)−∇yg (x′,yt)) ]∥∥∥∥
≤ (1 + αyT lg) ∥x− x′∥
The second item can be derived from:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥∥ [ x′ − x′ − αx (∇f(x′,y)−∇f (x′,y′)) y − y′ + ∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
≤ ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥+ ∥∥∥∥∥ [ αx (∇f(x′,y)−∇f (x′,y′))∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
From the Lipschitz continuous, we have:
T−1∑ t=0 αy ( ∇yg ( x,yt ) −∇yg ( x,yt )) ≤ T−1∑ t=0 αylg ∥∥yt − yt∥∥
Now we consider the t-th update: αylg ∥∥yt − yt∥∥ = αylg ∥∥yt−1 − αy∇yg (x′,yt−1)− yt−1 + αy∇yg (x′,yt−1)∥∥
≤ αylg ∥∥∥yt−1 − (yt−1)′∥∥∥+ (αylg)2 ∥∥∥yt−1 − (yt−1)′∥∥∥
· · · ≤ (αylg)t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′∥∥∥
According to the accumulation of the both side, we have: T−1∑ t=0 αylg ∥∥∥yt − (yt)′∥∥∥ ≤ αylg ∥∥∥y0 − (y0)′∥∥∥ ∥ T−1∑ t=1 [ (αylg) t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′]∥∥∥
=
[ 1− (αylg)T
1− αylg +
(αylg) 2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ = [ 1− (αylg)T + (αylg)2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ ≤ 1 + (αylg) 2
1− αylg ∥∥y − (y)′∥∥
Let αl = max{αylg, 1+(αylg) 2
1−αylg }, then:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + Tαl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ ∥∥∥∥∥ [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
≤ max (lfαx)2, ( 1 + (αylg) 2
1− αylg
)2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (6)
and the second decomposition can be obtained by the NC-NC case:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ( 1 + max{lfαx, 1 + (αylg) 2
1− αylg }
) ∥y − y′∥ . (7)
let αl = max{αxlf , 1+(αylg) 2 1−αylg }. Combining the above equations 6, 7 and inequality √ 1 + (αl)2 ≤ (1 + αl)2, then we can derive the expansive of update rule GT under convexity condition:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥ 2 + ∥y∥2) will be convex. Let αx = αy = α and denote αl = max{αxlf , 1+(αylg) 2
1−αylg }, we can derive the following with the conclusions from the convex case:∥∥∥∥GT ([ xy ]) −GT ([ x′ y
])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ αx 2 ∥∥∥∥∥ [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥∥ [ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ]∥∥∥∥∥ 2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≤ ( 1− 2α (µf + µg) + α2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , 1+(αylg) 2
(1−αylg)αy }, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥∥ [ ∇f(x,y)−∇f (x′,y)∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≥ ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
Proof. Because the main proof of Lemma 12 is similar to that of Lemma 9, we omit it.
Next, we give a bound for the update rule GT and prepare to prove Theorem 3. Since g() is a lg-smooth function, we have:
g ( x,yt+1 ) ≤ g ( x,yt ) + 〈 ∇g ( x,yt ) ,yt+1 − yt 〉 +
lg 2 ∥∥yt+1 − yt∥∥2 . ≤ g ( x,yt ) − 〈 ∇g ( x,yt ) , αy∇g ( x,yt )〉 +
lg 2 ∥∥αy∇g (x,yt)∥∥2 ≤ g ( x,yt ) − αy ( 1− αylg
2 )∥∥∇g (x,yt)∥∥2 . The two sides are accumulated from t = 1 to t = T and we could derive the following by Cauchy–Schwarz inequality:
T∑ t=1 ∥∥∇g (x,yt)∥∥2 ≤ g (x,y1)− g (x,yT ) αy (2− αylg) ⇒
( T∑
i=1
∇g(x,yt) )2 ≤ T
T∑ i=1 ∇g2(x,yt)
≤ T (g (x,y1)− g (x,yT )) αy(2− αylg) .
Hence, the bound of GT equals to √ L2fα 2 x + ( T (g(x,y1)−g(x,yT ))
αy(2−αylg)
)2 . Now, we are ready to give the
proof of Theorem 3.
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing in only one sample. Consider the updates G1T , ..., G | 1. What are the main contributions and strengths of the paper regarding bilevel optimization algorithms?
2. What are the weaknesses and limitations of the paper, especially regarding the bound's tightness and the summary of relevant work?
3. How does the paper compare with existing stability analysis on specific bilevel problems such as minmax problems and meta-learning?
4. Why are the definitions of uniform stability with probability 1 - δ and Theorem 1 (c) needed?
5. How do the proofs of TSGD case differ from SSGD case, and why should they be included or omitted?
6. What is the significance of using ∇y g and ∇f instead of ∇x f?
7. How can the paper improve its clarity, quality, novelty, and reproducibility? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the generalization of bilevel optimization algorithms in the framework of algorithm stability. It focuses on two algorithms, namely, single-time scale (SSGD) and two-time scale (TSGD) with first-order stochastic gradient updates.
Different from prior work, the algorithms considered do not require reinitialization in the inner loop. Experiments on data reweighting and meta-learning are provided to show how the generalization performance changes w.r.t. the number of iterations.
Strengths And Weaknesses
Strengths
The paper is clearly written and easy to follow.
The paper considers two algorithms SSGD, and TSGD without reinitialization in the inner loop, which are different from prior works and are important in bilevel optimization. It fills the gap in the current literature.
Weaknesses
1. The bound is loose.
1.1) The stability bounds derived in this paper seem to have worse dependence on the outer iteration number K than [Bao et al. 2021]: they have exponential dependence on K in the worst case in this paper, while only polynomial dependence on K in [Bao et al. 2021].
1.2) No discussion of the tightness of the bound is provided in either theoretical or empirical form.
1.3) It is counter-intuitive that when f and g are strongly convex, the stability has a worse rate in K (exponential) than when f and g are non-convex (polynomial). I believe the bounds for the SC-SC and C-C cases are not tight.
2. Incorrectness
2.1) Summary of the most relevant work [Bao et al. 2021] is incorrect.
a) My understanding of [Bao et al. 2021] is that it assumes the compositional function f(x, ŷ(x, Dm2); ξ) w.r.t. x is Lipschitz continuous, which also has a dependence on the number of inner-loop iterations T. Therefore, the summary in Table 1 for UD [Bao et al. 2021], which does not depend on T, is not accurate. In addition, [Bao et al. 2021] provides results for the convex case in its appendix.
b) What is ∇f? The comparison of UD [Bao et al. 2021] and TSGD in Appendix A is not correct, as the gradient w.r.t. x in the outer loop is not the same based on my understanding, because UD [Bao et al. 2021] requires second-order information.
2.2) The outer update/reference of SSGD and TSGD is incorrect.
a) In Algorithms 1 & 2, the update for the outer parameter x does not take the gradient w.r.t. the function ŷ? Could you add references for the two algorithms analyzed in this paper, for the single-timescale and two-timescale cases, separately?
b) If the update for the outer parameter x does not take the gradient w.r.t. the function ŷ, it is not the same as the algorithms in some two-timescale methods referenced in this paper, such as [Ghadimi & Wang, 2018] and [Ji et al. 2021], as they require second-order information of the loss during the update of x.
3. Lack of clarity
a) How do the results in this paper compare with existing stability analysis on specific bilevel problems such as minmax problems and meta-learning?
b) Although different algorithmic stability concepts are defined, only uniform stability in expectation is used for the main theorems in Section 4. Why do you need to define uniform stability with probability 1 − δ in Definition 4 (a)?
c) Why do you need Theorem 1 (c)? Theorem 1 (c) is not used in Section 4.
d) The proofs are not clear. Some proofs for the TSGD case are omitted due to the claimed similarity to the SSGD case (see pages 20-21). However, I suggest that you include the proofs for the TSGD case and omit the SSGD case.
3.2 Minor comments
a) In Theorem 1 (c), what is e?
b) In Theorem 1 (c), is A uniformly stable with probability at least 1 − δ?
c) Throughout the paper, why do you use ∇y g for the update of y and ∇f for the update of x, but not ∇x f?
d) The notation G is used for both the inner population risk and the update function; please use different notations to avoid confusion.
Clarity, Quality, Novelty And Reproducibility
Clarity
The writing is clear. However, some of the claims are not well supported. See Weaknesses – Incorrectness and Weaknesses – Lack of Clarity.
Quality The paper deals with the important problem of algorithmic stability in bilevel optimization. The paper and the proofs are clearly written. Experiments on meta-learning and data reweighting are conducted.
Novelty Though prior works exist that consider algorithm stability of bilevel learning, this paper considers two different algorithms without reinitialization in the inner loop. |
ICLR | Title
On Stability and Generalization of Bilevel Optimization Problems
Abstract
(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with a wide range of applications such as meta-learning, hyper-parameter optimization, and reinforcement learning. Most of the existing studies on this problem only focused on analyzing the convergence or improving the convergence rate, while little effort has been devoted to understanding its generalization behaviors. In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem. We first establish a fundamental connection between algorithmic stability and generalization gap in different forms and give a high probability generalization bound which improves the previous best one from O( √ n) to O(log n), where n is the sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
N/A
√ n) to O(log n), where n is the
sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
1 INTRODUCTION
(Stochastic) bilevel optimization is a widely confronted problem in machine learning with various applications such as meta-learning (Finn et al., 2017; Bertinetto et al., 2018; Rajeswaran et al., 2019), hyper-parameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Baydin et al., 2017; Bergstra et al., 2011; Luketina et al., 2016), reinforcement learning (Hong et al., 2020), and few-shot learning (Koch et al., 2015; Santoro et al., 2016; Vinyals et al., 2016). The basic form of this problem can be defined as follows
min x∈Rd1
R(x) = F (x,y∗(x)) := Eξ [f (x,y∗(x); ξ)]
s.t. y∗(x) = arg min y∈Rd2
{G(x,y) := Eζ[g(x,y; ζ)]} , (1)
where f : Rd1 × Rd2 → R and g : Rd1 × Rd2 → R are two continuously differentiable loss functions with respect to x and y. Problem (1) has an optimization hierarchy of two levels, where the outer-level objective function f depends on the minimizer of the inner-level objective function g.
Due to its importance, the above bilevel optimization problem has received considerable attention in recent years. A natural way to solve problem (1) is to apply alternating stochastic gradient updates with approximating ∇yg(x,y) and ∇f(x,y), respectively. Briefly speaking, previous efforts mainly examined two types of methods to perceive an approximate solution that is close to the optimum y∗(x). One is to utilize the single-timescale strategy (Chen et al., 2021; Guo et al., 2021; Khanduri et al., 2021; Hu et al., 2022), where the updates for y and x are carried out simultaneously. The other one is to apply the two-timescale strategy (Ghadimi & Wang, 2018; Ji et al., 2021;
Hong et al., 2020; Pedregosa, 2016), where the update of y is repeated multiple times to achieve a more accurate approximation before conducting the update of x.
While there is a long list of work on bilevel optimization, most of the existing work only focuses on either analyzing its convergence behaviors (Ghadimi & Wang, 2018; Hong et al., 2020; Ji et al., 2021) or improving its convergence rate, based on the convexity and the smoothness properties of f(·, ·) and/or g(·, ·) (Liu et al., 2020; Li et al., 2020). In contrast, little effort has been devoted to understanding the generalization behavior of the problem. To the best of our knowledge, there is only one recent work on the generalization analysis for bilevel problems (Bao et al., 2021), which presents the first expected uniform stability bound. However, there are still several undesirable issues in this work: (1) Their result is only for uniform stability (which could be deduced from argument stability under certain conditions, see Definition 4 for details), leaving the analysis of other, stronger definitions of algorithmic stability open; (2) Additionally, their UD algorithm allows the outer level parameters to be updated continuously but needs to reinitialize the inner level parameters before each pass over the inner loop, which is not commonly done in practice due to its inefficiency (see line 4 in Algorithm 3); (3) The proof of Theorem 2 in their work does not clearly show whether the update of the outer level parameters depends on the inner level parameters, so there may exist some gap in the analysis of the UD algorithm (see Appendix E for detailed discussions); (4) Their experiments take only hyper-parameter optimization into consideration and neglect other applications of bilevel optimization.
To address all the aforementioned issues, we give in this paper a thorough analysis on the generalization behaviors of first-order (gradient-based) methods for general bilevel optimization problem. We employ the recent advances of algorithmic stability to investigate the generalization behaviors in different settings. Specifically, our main contributions can be summarized as follows:
• Firstly, we establish a fundamental connection between generalization gap and different notations of algorithmic stability (argument stability and uniform stability) for any randomized bilevel optimization algorithms in both expectation and high probability forms. Specifically, we show that the high probability form of the generalization gap bound can be improved from O( √ n) to O(log n) compared with the result in Bao et al. (2021).
• Next, we present the stability bounds for gradient-based methods with either singletimescale or two-timescale update strategy under different standard settings. To the best of our knowledge, this work provides the first stability bounds for the two-timescale (double loop) algorithms, which allows the accumulation of the sub-sampled gradients in the inner level. In detail, we consider the settings of strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC), and further extend our analysis to a particular nonconvex-strongly-convex (NC-SC) setting that is widely appeared in practice. Table 1 is the summary of our main results.
• Thirdly, we provide the first generalization bounds for the case where both the outer and inner level parameters are subject to continuous (iterative) changes. Compared to the previous work (Bao et al., 2021), our work does not need the reinitialization step before each iteration in the inner level and hence our algorithm can carry over the last updated inner level parameters, which is more general and practical.
• Finally, we conduct empirical studies to corroborate our theories via meta-learning and hyperparameter optimization, which are two applications of bilevel optimization.
Due to space limitations, all proofs and additional experiments are included in the Appendix.
1.1 RELATED WORK
Research at the interface between generalization and the bilevel problem can be roughly classified into two categories. The first one includes all the research on bilevel optimization. In recent decades, extensive studies have been done on this topic, which suggests that bilevel optimization has a wide range of applications in machine learning such as hyper-parameter optimization (Franceschi et al., 2018; Lorraine & Duvenaud, 2018; Okuno et al., 2021), meta learning (Bertinetto et al., 2018; Rajeswaran et al., 2019; Soh et al., 2020) and reinforcement learning (Yang et al., 2018; Tschiatschek et al., 2019). Most of the existing work studies the problem from an optimization perspective. For example, Ghadimi & Wang (2018); Ji et al. (2021) provide convergence rate analyses under the nonconvex-strongly-convex assumption on the two functions f(·, ·) and g(·, ·). Grazzi et al. (2020) consider the iteration complexity of hypergradient computation. Liu et al. (2020); Li et al. (2020) present an asymptotic analysis for the convex-strongly-convex setting. Perhaps the most related work to ours from the generalization standpoint (i.e., the expectation of population risk and empirical risk) is Bao et al. (2021), although there may be a gap in its analysis of the UD algorithm. In this work, we employ a novel approach to examine the stability bounds of bilevel optimization problems. Firstly, our work analyzes the generalization behavior by observing how different settings directly affect the stability bounds. Secondly, our work adopts a stronger version of stability called argument stability, which implies the previously used uniform stability when the function is sufficiently smooth. Furthermore, our work does not need to reinitialize the inner-level parameters and allows them to carry over their last updated values each time the inner level is updated. This means that y in the inner level is updated iteratively and depends on the current parameter x, which is more common and efficient in practice.
The second category includes all the work on stability analysis. There is a long list of research on stability and generalization (Bousquet & Elisseeff, 2002; Mukherjee et al., 2006; Shalev-Shwartz et al., 2010). Bousquet & Elisseeff (2002) first introduced the notion of uniform stability and established the first framework of stability analysis. Hardt et al. (2016) later extended the stability analysis to iterative algorithms based on stochastic gradient methods for vanilla stochastic optimization. Subsequent studies have provided generalization analyses for various problems via algorithmic stability, such as minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021; Zhang et al., 2021) and pairwise learning (Yang et al., 2021; Lei et al., 2020; Xue et al., 2021; Huai et al., 2020). However, due to the additional stochastic function in the constraint of the bilevel problem, none of these previous techniques and results can be applied directly to our setting. Although the generalization analysis of minmax optimization is somewhat similar to ours, typical minmax problems involve only one objective function f and a single level of updates, whereas bilevel optimization algorithms involve both an inner and an outer level, which is considerably more challenging.
2 PRELIMINARIES
2.1 DEFINITIONS AND ASSUMPTIONS
In the following, we give some necessary definitions and assumptions that are widely used in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2021; Khanduri et al., 2021) and generalization analysis (Hardt et al., 2016; Lei et al., 2021).
Definition 1 (Joint Lipschitz Continuity). A function f(x,y) is jointly L-Lipschitz over R^{d1} × R^{d2} if for all x, x′ ∈ R^{d1} and y, y′ ∈ R^{d2}, |f(x,y) − f(x′,y′)| ≤ L √(∥x − x′∥_2^2 + ∥y − y′∥_2^2).
Definition 2 (Smoothness). A function f is l-smooth over a set S if for all u, w ∈ S, ∥∇f(u) − ∇f(w)∥ ≤ l∥u − w∥.
Definition 3 (Strong Convexity). A function f is µ-strongly-convex over a set S if for all u, w ∈ S, f(u) + ⟨∇f(u), w − u⟩ + (µ/2)∥w − u∥^2 ≤ f(w).
Assumption 1 (Inner-level Function Assumption). We assume the inner stochastic function g(x,y) in (1) satisfies the following: (i) g(x,y) is jointly Lg-Lipschitz for any x ∈ R^{d1} and y ∈ R^{d2}; (ii) g(x,y) is continuously differentiable and lg-smooth for any (x,y) ∈ R^{d1} × R^{d2}.
Assumption 2 (Outer-level Function Assumption). We assume the outer stochastic function f(x,y) in (1) satisfies the following: (iii) f(x,y) is jointly Lf-Lipschitz for any x ∈ R^{d1} and y ∈ R^{d2}; (iv) f(x,y) is continuously differentiable and lf-smooth for any (x,y) ∈ R^{d1} × R^{d2}.
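To make these regularity conditions concrete, the short sketch below (our illustration, not part of the original text) constructs a toy quadratic bilevel instance for which the constants in Definitions 1–3 and Assumptions 1–2 can be computed explicitly; the particular functions, dimensions, and the bounded domain of radius R are all illustrative assumptions.

```python
import numpy as np

# Toy quadratic bilevel instance: f(x, y) = 0.5*||x - a||^2 + 0.5*||y - b||^2,
# g(x, y) = 0.5*||y - A x||^2 + 0.5*mu_g*||y||^2, restricted to a ball of radius R.
# On that ball both functions are jointly Lipschitz and smooth, f is 1-strongly convex
# in (x, y), and g is mu_g-strongly convex in y, so the constants are explicit.
rng = np.random.default_rng(0)
d1, d2, R, mu_g = 3, 2, 10.0, 0.5
a, b = rng.normal(size=d1), rng.normal(size=d2)
A = rng.normal(size=(d2, d1))

def f(x, y):   # outer objective
    return 0.5 * np.sum((x - a) ** 2) + 0.5 * np.sum((y - b) ** 2)

def g(x, y):   # inner objective
    return 0.5 * np.sum((y - A @ x) ** 2) + 0.5 * mu_g * np.sum(y ** 2)

l_f, mu_f = 1.0, 1.0                          # smoothness and strong convexity of f
L_f = np.sqrt(2) * (R + max(np.linalg.norm(a), np.linalg.norm(b)))  # Lipschitz bound of f on the ball
H = np.block([[A.T @ A, -A.T], [-A, (1 + mu_g) * np.eye(d2)]])      # joint Hessian of g
l_g = np.linalg.eigvalsh(H).max()             # smoothness constant of g
print(f"l_f={l_f}, mu_f={mu_f}, L_f~{L_f:.2f}, l_g={l_g:.2f}, mu_g={mu_g}")
```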
2.2 PROBLEM FORMULATION
Given two distributions D1 and D2, in the (stochastic) optimization problem we aim to find the minimizer of Problem (1). However, since the distributions are often unknown, in practice we only have two finite-size datasets Dm1 = {ξi | i = 1, ..., m1} ∼ D1^{m1} and Dm2 = {ζi | i = 1, ..., m2} ∼ D2^{m2}, where the ξi and ζi are i.i.d. sampled from D1 and D2, respectively. Based on these datasets, we design some (randomized) algorithm A with output A(Dm1, Dm2) = (x,y) ∈ R^{d1} × R^{d2}. Our goal is to investigate the generalization behavior of this output. Note that although there are two stochastic functions in the bilevel optimization problem, we only care about the generalization of the outer-level one, since it is the one we ultimately wish to minimize.
Below we define the generalization gap to measure the generalization behavior. Given a distribution D1 and a finite dataset Dm1 ∼ D1^{m1}, the population risk R(x,y,D1) of (x,y) on D1 is defined as R(x,y,D1) := E_{ξ∼D1}[f(x,y(x); ξ)], and its empirical risk on Dm1 is Rs(x,y,Dm1) = (1/m1) Σ_{i=1}^{m1} f(x,y(x); ξi). Moreover, for a fixed hyperparameter x ∈ R^{d1} and y(x) ∈ R^{d2} (note that y(x) may depend on x), we define the difference between the population risk and the empirical risk over (x,y(x)) as the bilevel generalization gap of (x,y(x)): Es[R(x,y) − Rs(x,y)], where Es denotes the expectation over Dm1 ∼ D1^{m1}. When there is no ambiguity, we hereafter simplify the notation as follows: R(x,y,D1) = R(x,y) and Rs(x,y,Dm1) = Rs(x,y). Our goal is thus to analyze the bilevel generalization gap of the output of an algorithm A(Dm1, Dm2) based on Dm1 and Dm2. Since the generalization gap depends on the algorithm itself, in the following we introduce the algorithms considered in this paper.
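In practice, the bilevel generalization gap of a fixed output (x, y) can be estimated by Monte Carlo, comparing the empirical risk on Dm1 with the loss on fresh samples from D1. The sketch below is a minimal illustration under the assumption that a per-sample outer loss f_loss and a sampler for D1 are available; both names are placeholders, not the authors' code.

```python
import numpy as np

def estimate_bilevel_gap(f_loss, x, y, train_xi, sample_xi, n_mc=10_000):
    """Estimate R(x, y) - Rs(x, y) for a fixed output (x, y) of a bilevel algorithm.

    f_loss(x, y, xi) -> float is the outer-level loss on one sample xi,
    train_xi is the outer-level training set Dm1, and sample_xi() draws a
    fresh sample from D1 (both assumed to be available in this sketch).
    """
    emp_risk = np.mean([f_loss(x, y, xi) for xi in train_xi])               # Rs(x, y)
    pop_risk = np.mean([f_loss(x, y, sample_xi()) for _ in range(n_mc)])    # Monte Carlo estimate of R(x, y)
    return pop_risk - emp_risk
```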
Most of the existing algorithms adopt the following idea: first approximate y∗ on Dm2 for a given parameter x in the inner level and then seek the hyperparameter x∗(Dm1 , Dm2) with corresponding hypothesis y∗(x∗(Dm1 , Dm2), Dm2) by the below estimation:
x̂(Dm1 , Dm2) ≈ argminx Rs(x, ŷ(x, Dm2), Dm1),
where ŷ(x, Dm2) ≈ argminy Gs(x,y, Dm2), (2)
where Gs(x,y,Dm2) is the empirical risk of G(x,y) over Dm2, i.e., Gs(x,y,Dm2) = (1/m2) Σ_{i=1}^{m2} g(x,y(x); ζi). Most of the current gradient-based (first-order) algorithms for approximating (2) can be categorized into two classes: single-timescale methods and two-timescale methods. The single-timescale method performs the updates for y and x simultaneously via stochastic gradient descent (SGD), while the two-timescale method updates y multiple times before updating x (via stochastic gradient descent). As there are numerous approaches in both classes (see the Related Work section for details), in this paper we analyze the generalization behaviors of the most classical and standard method in each class, i.e., single-timescale SGD (SSGD; Algorithm 1) and two-timescale SGD (TSGD; Algorithm 2). There is a long list of work (Chen et al., 2021; Ghadimi & Wang, 2018; Ji et al., 2021) based on either SSGD or TSGD.
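For reference, the following sketch mirrors the two update schemes described above (and stated formally in Algorithms 1 and 2). It is our illustration, assuming stochastic gradient oracles grad_f and grad_g_y for the outer and inner losses; it is not the authors' released implementation.

```python
import numpy as np

def ssgd(x0, y0, grad_f, grad_g_y, D1, D2, K, ax, ay, seed=0):
    """Single-timescale SGD (Algorithm 1): one simultaneous (y, x) update per iteration."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(K):
        zeta = D2[rng.integers(len(D2))]       # inner-level sample i in [m2]
        xi = D1[rng.integers(len(D1))]         # outer-level sample j in [m1]
        gy = grad_g_y(x, y, zeta)              # nabla_y g(x_k, y_k; zeta_i)
        gx = grad_f(x, y, xi)                  # nabla f(x_k, y_k; xi_j), evaluated at y_k
        y, x = y - ay * gy, x - ax * gx
    return x, y

def tsgd(x0, y0, grad_f, grad_g_y, D1, D2, K, T, ax, ay, seed=0):
    """Two-timescale SGD (Algorithm 2): T warm-started inner steps per outer step."""
    rng = np.random.default_rng(seed)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(K):
        for _ in range(T):                     # y is carried over from the previous outer iteration
            zeta = D2[rng.integers(len(D2))]
            y = y - ay * grad_g_y(x, y, zeta)
        xi = D1[rng.integers(len(D1))]
        x = x - ax * grad_f(x, y, xi)          # outer step uses y_k^T
    return x, y
```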
3 GENERALIZATION AND STABILITY FOR BILEVEL OPTIMIZATION
Algorithmic stability is one of the classical approaches to analyzing the generalization bound of algorithms. Roughly speaking, the algorithmic stability of a (randomized) algorithm A measures how much the output of A changes if we change one data sample in the input dataset. While there are various notions of stability, most of the existing work on analyzing the stability of stochastic optimization, pairwise learning, and minimax optimization focuses on uniform stability (Bousquet & Elisseeff, 2002) and argument stability (Liu et al., 2017; Lei & Ying, 2020). Thus, we also adopt these two notions of stability for the bilevel optimization problem. Briefly speaking, uniform stability focuses on the resulting change in the population risk function, while argument stability considers the resulting change in the arguments, i.e., the output of the algorithm. Definition 4 (Algorithmic Stability). Let A : D1^{m1} × D2^{m2} → R^{d1} × R^{d2} be a randomized algorithm.
Algorithm 1 Single-timescale SGD (SSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets Dm1 and Dm2
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   Uniformly sample i ∈ [m2], j ∈ [m1]
5:   y_{k+1} = y_k − αy ∇y g(x_k, y_k(x_k); ζi)
6:   x_{k+1} = x_k − αx ∇f(x_k, y_k(x_k); ξj)
7: end for
8: return xK and yK

Algorithm 2 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     Uniformly sample i ∈ [m2]
7:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); ζi)
8:   end for
9:   Uniformly sample j ∈ [m1]
10:  x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); ξj)
11: end for
12: return xK, y_K^T
(a) A is β-uniformly-stable if for all datasets Dm1, D′m1 ∼ D1^{m1} and Dm2 ∼ D2^{m2} such that Dm1 and D′m1 differ in at most one sample, we have, for any ξ ∼ D1,
EA[|f(A(Dm1, Dm2), ξ) − f(A(D′m1, Dm2), ξ)|] ≤ β.
A is β-uniformly-stable with probability at least 1 − δ if, for any ξ ∼ D1, the following holds with probability at least 1 − δ:
|f(A(Dm1, Dm2), ξ) − f(A(D′m1, Dm2), ξ)| ≤ β.
(b) A is β-argument-stable in expectation if for all datasets Dm1, D′m1 ∼ D1^{m1} and Dm2 ∼ D2^{m2} such that Dm1 and D′m1 differ in at most one sample, we have
EA[∥A(Dm1, Dm2) − A(D′m1, Dm2)∥2] ≤ β.
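Argument stability as in part (b) can also be probed numerically: run the same (seeded) algorithm on two datasets differing in a single outer-level sample and measure the distance between the outputs. The snippet below is a minimal sketch of this procedure; run_algorithm and make_sample are placeholders for, e.g., the SSGD/TSGD sketches above and a sampler for D1.

```python
import numpy as np

def empirical_argument_stability(run_algorithm, D1, D2, make_sample, n_trials=20, seed=0):
    """Estimate E_A[ || A(D, D2) - A(D', D2) ||_2 ] where D' replaces one sample of D1."""
    rng = np.random.default_rng(seed)
    gaps = []
    for t in range(n_trials):
        D1_prime = list(D1)
        D1_prime[rng.integers(len(D1))] = make_sample()   # perturb a single outer-level sample
        x, y = run_algorithm(D1, D2, seed=t)              # shared randomness across the two runs
        xp, yp = run_algorithm(D1_prime, D2, seed=t)
        gaps.append(np.sqrt(np.sum((x - xp) ** 2) + np.sum((y - yp) ** 2)))
    return float(np.mean(gaps))
```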
Note that the definition of uniform stability in expectation is the same as the definition in Bao et al. (2021). Thus, our other definitions can be considered as extensions of the previous stability notions for bilevel optimization. In the following, we present Theorem 1 as our first result, which shows a crucial relationship between the generalization gap and algorithmic stability for an algorithm A. Theorem 1. Let A : D1^{m1} × D2^{m2} → R^{d1} × R^{d2} be a randomized bilevel optimization algorithm.
(a) If A is β-uniformly-stable in expectation, then the following holds for Dm1 ∼ D1^{m1}, Dm2 ∼ D2^{m2}:
EA,Dm1[R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2))] ≤ β.
(b) If A is β-argument-stable in expectation and Assumption 2 holds, then the following holds for Dm1 ∼ D1^{m1}, Dm2 ∼ D2^{m2}:
EA,Dm1[R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2))] ≤ Lf β.
(c) Assume that |f(x,y; ξ)| ≤ M for some M ≥ 0. If A is β-uniformly-stable almost surely, then for Dm1 ∼ D1^{m1}, Dm2 ∼ D2^{m2}, the following holds with probability at least 1 − δ:
|R(A(Dm1, Dm2)) − Rs(A(Dm1, Dm2))| ≤ 2β + e ( (4M/√m1) √(log(e/δ)) + 12√2 β⌈log2 m1⌉ √(log(e/δ)) ),
where e is the base of the natural logarithm.
Remark 1. The above theorem shows that the generalization gap can be controlled by several notions of algorithmic stability. Parts (a) and (b) show that the expected generalization gap can be bounded by uniform stability and by argument stability (times the Lipschitz constant), respectively; Part (c) indicates that the generalization gap of the algorithm is at most O(β log(m1) + 1/√m1) with probability 1 − δ. Compared with the existing work (Bao et al., 2021), Theorem 1 additionally considers argument stability, which is a stronger notion of stability than uniform stability (since uniform stability can be deduced from argument stability when the function is sufficiently smooth). Moreover, we use McDiarmid’s inequality and the equivalence of tails and moments for random variables with a mixture of sub-gaussian and sub-exponential tails (Lemma 1 in Bousquet et al. (2020)), which provides a significantly improved high probability bound in Part (c) (i.e., improving from O(β√m1) in Bao et al. (2021) to O(β log m1)).
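To illustrate the improvement discussed in Remark 1, the following snippet compares the β-dependent part of the two high-probability rates, β√m1 versus β⌈log2 m1⌉, for a fixed illustrative stability level β; the constants are placeholders and only the scaling in m1 matters.

```python
import numpy as np

# The beta-dependent part of the high-probability bound improves from
# O(beta * sqrt(m1)) (Bao et al., 2021) to O(beta * log m1) (Theorem 1(c)).
beta = 1e-3
for m1 in [10**2, 10**3, 10**4, 10**5, 10**6]:
    print(f"m1={m1:>7}  beta*sqrt(m1)={beta * np.sqrt(m1):9.4f}  "
          f"beta*ceil(log2 m1)={beta * np.ceil(np.log2(m1)):9.4f}")
```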
4 STABILITY ANALYSIS FOR BILEVEL OPTIMIZATION ALGORITHMS
Motivated by Theorem 1, we can see that to analyze the generalization behavior of any algorithm, it is sufficient to analyze its stability. As mentioned in Section 2.2, we consider the stability of SSGD and TSGD. For simplicity, we let SC-SC denote the case where f and g are both strongly convex. C-C, NC-NC, and NC-SC are defined similarly, with "C" denoting a convex function and "NC" a nonconvex function.
4.1 STABILITY BOUNDS FOR SINGLE-TIMESCALE SGD
As we can see from Algorithm 1, SSGD updates y and x simultaneously. In the following we develop stability bounds for this algorithm in different settings. Theorem 2. Suppose that Assumptions 1 and 2 hold and Algorithm A is SSGD with K iterations:
(a) Assume that Problem (1) is SC-SC with strong convexity parameters µf and µg. Let αx = αy (see Lemma 9 for details) be the step sizes and denote l = max{lf, lg}. Then A is β-argument-stable in expectation, where
β ≤ O( (L_f^2 + L_g^2)^{1/2} / ( m1 ( µf + µg − (αx l)^2/2 + 0.25 ) ) ).
(b) Assume that Problem (1) is C-C. Let αx, αy be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √((αx Lf)^2 + (αy Lg)^2) · ( 2 + 2 max{(αx lf)^2, (αy lg)^2} )^{K/2} ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{αx, αy} ≤ c/k for some constant c ≥ 0 and let l = max{lf, lg}. Then A is β-argument-stable in expectation, where
β ≤ O( (m1 c l)^{-1} ( 2cLf √(l_f^2 + l_g^2) )^{1/(cl+1)} · K^{cl/(cl+1)} ),
where lf, lg and Lf, Lg are the smoothness constants and Lipschitz constants of f and g, respectively. Remark 2. Note that the above stability bounds are independent of the specific form of the objective function f(·, ·) and of the exact sample distribution D1; they rely instead on the properties of the loss functions and the sample size m1, and the bounds in the C-C and NC-NC cases additionally depend on the number of iterations. Specifically, Part (a) establishes a stability bound of O(1/m1) in the SC-SC setting, and Part (b) gives, for the C-C case, a stability bound of O(κ1^{K/2}/m1) depending on the number of iterations and the data size, where κ1 is a constant. The NC-NC case is discussed in Part (c), which provides a stability bound of O(K^{cl/(cl+1)}/m1), where c is a constant controlling the step size and l is the larger of the smoothness constants lf and lg. These conclusions match existing results for minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021).
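The dependence of the NC-NC bound in Part (c) on the number of iterations can be made tangible by plugging in illustrative constants; the values of c, l, and m1 below are placeholders chosen only to show the sublinear growth in K.

```python
import numpy as np

# Growth of the NC-NC stability bound of Theorem 2(c), beta = O(K^{cl/(cl+1)} / m1),
# for illustrative (placeholder) constants c and l = max{l_f, l_g}.
c, l, m1 = 0.1, 5.0, 10_000
expo = (c * l) / (c * l + 1)          # exponent cl/(cl+1) < 1: sublinear growth in K
for K in [10**2, 10**3, 10**4, 10**5]:
    print(f"K={K:>6}  K^(cl/(cl+1))/m1 = {K**expo / m1:.5f}")
```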
4.2 STABILITY BOUNDS FOR TWO-TIMESCALE SGD
Compared with the above SSGD, two-timescale SGD (TSGD; Algorithm 2) achieves more accurate approximate solutions by updating y multiple times before updating x. In this section, we extend our analysis from SSGD to TSGD. In particular, compared with the results in Bao et al. (2021), we provide stability bounds in Theorem 3 for the case where the inner level parameter (y) is updated iteratively (i.e., carried over across outer iterations rather than reinitialized). We further explore in Theorem 4 a particular NC-SC setting, which commonly appears in bilevel optimization applications such as meta learning and hyperparameter optimization. Theorem 3. Suppose that Assumptions 1 and 2 hold and |g(·, ·)| ≤ 1. Let A be the TSGD algorithm with K outer iterations and T inner iterations. Then we have
(a) Assume that Problem (1) is SC-SC. Let l = max{ lf, (1 + (αy lg)^2) / ((1 − αy lg) αy) } and let α = αx = αy ≤ min{1/lg, 1/(µf + µg)} be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √( L_f^2 α_x^2 + ( 2T / (α_y (2 − α_y l_g)) )^2 ) · (1 + αl)^K ).
(b) Assume that Problem (1) is C-C. Let αl = max{ αx lf, (1 + (αy lg)^2) / (1 − αy lg) } and let αx, αy ≤ 1/lg be the step sizes. Then A is β-argument-stable in expectation, where
β ≤ O( m1^{-1} √( L_f^2 α_x^2 + ( 2T / (α_y (2 − α_y l_g)) )^2 ) · (1 + αl)^K ).
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy max{αx, αy} ≤ c/k for some constant c ≥ 0 and let l = max{lf, lg}. Then A is β-argument-stable in expectation, where
β ≤ O( (m1 T c l)^{-1} ( 2cLf √(l_f^2 + T^2 l_g^2) )^{1/(Tcl+1)} · K^{Tcl/(Tcl+1)} ).
Remark 3. Compared with the previous results for SSGD, the stability bounds of TSGD depend on the number of outer level iterations, the number of inner level iterations, and the data size in the outer level. If the step sizes are sufficiently small, the bounds in Theorem 3 are asymptotically the same as the bounds of SSGD in Theorem 2. Thus, Theorem 3 can be considered a generalization of the previous result. The dependence on T also highlights our novelty compared with existing stability analyses for other problems, such as plain SGD and minmax problems. To the best of our knowledge, this work provides the first stability bounds for two-timescale (double loop) algorithms, which allow the accumulation of the sub-sampled gradients in the inner level. Remark 4. Comparing our results with the ones in Bao et al. (2021), we make the following observations. 1) They only established a uniform stability bound for the Unrolled Differentiation algorithm (Algorithm 3), where the inner level is reinitialized each time the inner loop is entered, so their analysis only accounts for changes to the outer level parameter, while our analysis tracks the updates of both parameters. 2) Their proof needs to assume that the update of y in the inner level after reinitialization is not affected by the value specified for x. This assumption is quite uncommon and is probably the reason they do not need any assumption on the inner level objective function (see Appendix E for details). In contrast, our work allows the inner level parameters to be updated consistently (i.e., carrying over the value from the last update) instead of being reinitialized each time the inner loop is entered. Specifically, we allow y_k^T to be used at the beginning of the (k+1)-th outer level iteration, rather than y^0. This enables us to obtain different stability bounds for different inner level objective functions from a novel perspective.
In the following, we extend our analysis to a particular NC-SC setting that is frequently encountered in real-world applications and optimization analysis. Theorem 4. Suppose that Assumptions 1 and 2 hold, 0 ≤ f(·, ·) ≤ 1, and Problem (1) is NC-SC. Let A be the TSGD algorithm with K outer iterations and T inner iterations with max{αx, αy} ≤ c/k for a constant c ≥ 0. Denote l = max{lf, lg}. Then A is β-uniformly-stable in expectation, where
β ≤ O( ( 2cLf √(l_f^2 + l_g^2 T^2) )^{1/(c(Tl+l−µg)+1)} · K^{c(Tl+l−µg)/(c(Tl+l−µg)+1)} · (Tl + l − µg + 2/c) / ( m1 (Tl + l − µg) ) ).
Remark 5. We now sketch the technical differences from our previous analysis. Here we bound the vector (δx,k, δy,k)^T = (∥xk − x′k∥, ∥yk − y′k∥)^T, whereas the previous analysis used δk = √(∥xk − x′k∥_2^2 + ∥yk − y′k∥_2^2), where (xk, yk) and (x′k, y′k) are the outputs of TSGD after k iterations on Dm1 and D′m1, respectively, with Dm1 and D′m1 differing in one sample. In the NC-SC setting, we show that (δx,k+1, δy,k+1)^T ≤ ((1 + αx l)δx,k, (1 + αx T l)δy,k)^T (where ≤ denotes the entry-wise inequality), so this term can be controlled. Then, we take its expectation to derive our uniform stability bound. To obtain the generalization gap when both parameters change continuously, it is necessary to account for the growth of (δx,k, δy,k) rather than only δx,k as in Bao et al. (2021). Appendix C.3 provides more details.
Thus, based on our previous results, we now provide the first generalization bounds in the NC-NC setting for both SSGD and TSGD. Corollary 5. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Denote l = max{lf, lg} and let max{αx, αy} ≤ c/k for a constant c ≥ 0. Then the generalization gap of SSGD (Algorithm 1) with K iterations is bounded by O(K^{cl/(cl+1)}/m1).
Corollary 6. Assume that the problem is NC-NC, |f(·, ·; ξ)| ≤ 1 for all ξ, and Assumptions 1 and 2 hold. Let l = max{lf, lg} and max{αx, αy} ≤ c/k. Then the generalization gap of TSGD (Algorithm 2) with K outer iterations and T inner iterations is bounded by O( T^{1/(Tcl+1)} K^{1−1/(Tcl+1)} / m1 ).
Remark 6. By Theorems 1, 2, and 3, we can derive the above corollaries on the generalization gap from the stability bounds. Corollaries 5 and 6 show that an extremely large number of iterations (K for SSGD, and K, T for TSGD) will drastically reduce the stability of these algorithms and increase the generalization gap, which increases the risk of overfitting. We also verify this in the following experiments.
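As a quick numerical illustration of Corollary 6 (with placeholder constants), the snippet below evaluates the rate T^{1/(Tcl+1)} K^{1−1/(Tcl+1)}/m1 over a grid of T and K, showing how large iteration counts inflate the bound, in line with the overfitting trend reported in Section 5.

```python
import numpy as np

# Corollary 6: the TSGD generalization gap scales as O(T^{1/(Tcl+1)} * K^{1 - 1/(Tcl+1)} / m1).
# Illustrative (placeholder) constants; only the qualitative growth in K and T matters.
c, l, m1 = 0.1, 5.0, 2_000
for T in [1, 5, 10]:
    for K in [10**2, 10**3, 10**4]:
        a = 1.0 / (T * c * l + 1.0)
        rate = (T ** a) * (K ** (1.0 - a)) / m1
        print(f"T={T:>2} K={K:>6}  bound ~ {rate:.4f}")
```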
5 EXPERIMENTS
In this section, we empirically validate our theoretical results on real-world datasets. Two experiments, meta-learning and hyperparameter optimization, are conducted via TSGD (Algorithm 2); note that when T = 1, TSGD reduces to SSGD. Due to space limitations, we present only the meta-learning experiment here and defer the hyperparameter optimization experiment and other details to Appendix D.
5.1 META LEARNING
Consider the few-shot meta-learning problem with M tasks {Ti, i = 1, ...,M} sampled from a distribution PT. We aim to learn a model that can rapidly adapt to different tasks. Firstly, the embedding model ϕ is shared by all tasks to learn embedded features. Secondly, the task-specific parameter wi adapts the shared embedding to its own sub-problem. Thus, the overall meta-learning problem can be formulated as follows:
min_ϕ LD(ϕ, w̄∗) = E_{ξ∈D_i^{te}, Ti}[ L(ϕ, w_i^∗; ξ) ],   (3a)
s.t. w̄∗ = argmin_{w̄} [ L_{D^{tr}}(ϕ, w̄) = E_{Ti}[ L_{D_i^{tr}}(ϕ, wi) ] ],   (3b)
where D_i^{tr} and D_i^{te} are the training and testing datasets for task Ti. Each wi is computed by one or more gradient descent updates from w̄ on the corresponding task (rapid adaptation), i.e., wi = w̄ − α∇_{w̄} L_{D^{tr}}(ϕ, wi). In the inner level, the base learner optimizes the series of wi for each task (Equation 3b). In the outer level, the meta-learner optimizes the embedding model ϕ using the minimizers w_i^∗ learned from the inner level and computes the loss on the testing dataset (Equation 3a).
Settings and Implementation We evaluate the behavior on the 5-way-1-shot task on the Omniglot dataset (Lake et al., 2015), i.e., the goal is to classify 5 unseen classes from only 1 labeled sample each. The dataset contains 1623 different handwritten characters from 50 different alphabets. The images are greyscale with size 28 × 28. We follow settings similar to Ji et al. (2021). A five-layer fully-connected network is constructed, where the task-specific parameter wi corresponds to the last layer of the network and the shared embedding model ϕ corresponds to all preceding layers. Thus, we train the two sets of layers separately in the outer and inner levels of optimization. We build our model and training pipeline using the software library learn2learn (Arnold et al., 2020). We follow the official train-validation-test partition and train ϕ, wi using the training set. The layer sizes of the network are 784 → 256 → 128 → 64 → 64 → 5. We set the number of tasks for the training and testing sets to 2000 and the task batch size to 32. The learning rates of ϕ and wi are 0.002 and 0.01, respectively. Results are averaged over 5 trial runs with different random seeds.
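A condensed, first-order sketch of the training loop described above is given below for concreteness. It is our illustration of the bilevel structure (inner adaptation of the task head wi, outer update of the shared embedding ϕ), not the authors' released code; the helper sample_task and the tensor shapes are assumptions, and only a first-order adaptation step is shown.

```python
import torch
import torch.nn.functional as F

def meta_train_step(embed, head_init, sample_task, meta_opt, T=5, inner_lr=0.01, task_batch=32):
    """One outer-level (meta) update.

    `embed` is the shared embedding phi (all layers but the last), `head_init` is the shared
    initialization w_bar of the task-specific last layer, and `sample_task()` is a placeholder
    returning (support_x, support_y, query_x, query_y) for one sampled task.
    """
    meta_opt.zero_grad()
    for _ in range(task_batch):
        sx, sy, qx, qy = sample_task()
        w = head_init.clone()                               # task head w_i, started from w_bar
        for _ in range(T):                                  # inner level (Eq. 3b): adapt w_i on the support set
            inner_loss = F.cross_entropy(embed(sx) @ w, sy)
            grad_w = torch.autograd.grad(inner_loss, w)[0]  # first-order adaptation step
            w = w - inner_lr * grad_w
        outer_loss = F.cross_entropy(embed(qx) @ w, qy) / task_batch
        outer_loss.backward()                               # outer level (Eq. 3a): gradient w.r.t. phi and w_bar
    meta_opt.step()                                         # update the shared parameters
```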
Results Evaluation Figure 1 presents the learning curves on the training set, the testing set, and the generalization gap for different values of inner iterations T and outer iterations K. The generalization gap is estimated by the difference between the training and testing loss. On the one hand, the model easily overfits the testing set as K increases (Figure 1b), and the effect of T is very limited. On the other hand, with an appropriate value of K, a smaller T results in underfitting on the testing loss (T = 1 in Figure 1c causes the highest generalization gap due to the underfit training process). The trend of the generalization gap in terms of K and T indicates that large numbers of iterations increase the risk of overfitting, which matches our analysis in Theorem 4 that the stability of TSGD (Algorithm 2) decreases drastically.
6 CONCLUSION
We give a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization framework. In particular, we establish a quantitative connection between generalization and algorithmic stability and provide the first generalization bounds for continuous updates of both the inner and outer parameters in multiple settings. Our experiments suggest that inappropriate numbers of iterations can easily cause underfitting or overfitting, and the trend of the generalization gap also validates our theoretical results.
As discussed in the previous sections, we only considered first-order methods, while there exist a number of second-order and momentum-based approaches for solving the bilevel optimization problem. Dealing with the approximation of the hypergradient in generalization analysis is another direction for future work.
A COMPARISON BETWEEN UD AND TSGD
Algorithm 3 Unrolled Differentiation (UD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y^0
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); Dm2)
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); Dm1)
9: end for
10: return xK, y_K^T

Algorithm 4 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets Dm1, Dm2
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇y g(x_k, y_k^t(x_k); Dm2)
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); Dm1)
9: end for
10: return xK, y_K^T
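The only difference between Algorithm 3 (UD) and Algorithm 4 (TSGD) is how the inner loop is initialized at each outer step (line 4). The toy sketch below isolates exactly that difference; the full-batch gradient callables grad_f_x and grad_g_y are assumed placeholders.

```python
def run_bilevel(x0, y0, grad_f_x, grad_g_y, K, T, ax, ay, warm_start=True):
    """warm_start=True  -> TSGD (Algorithm 4): the inner loop continues from y of the previous outer step.
    warm_start=False -> UD   (Algorithm 3): the inner loop is reinitialized to y0 at every outer step."""
    x, y_prev = x0, y0
    for _ in range(K):
        y = y_prev if warm_start else y0      # line 4 of Algorithms 3/4: the only difference
        for _ in range(T):                    # inner loop on the full dataset Dm2
            y = y - ay * grad_g_y(x, y)
        x = x - ax * grad_f_x(x, y)           # outer step on the full dataset Dm1
        y_prev = y
    return x, y_prev
```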
B PROOF OF PRELIMINARIES
B.1 THE PROOF OF THEOREM 1
Proof of Part (a). Since ξ and ξi are drawn from the same distribution, we know EA[R(A(Dm1 , Dm2),D1)−Rs(A(Dm1 , Dm2), Dm1)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(Dm1 , Dm2), ξ)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(ξ, ξ2, .., ξi−1, ξi+1, ...ξm1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(D ′ m1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] ≤ β, where D′m1 and Dm1 differ in at most one sample ξi.
Proof of Part (b). Similarly, we have EA[f(A(Dm1 , Dm2),D1)− f(A(Dm1 , Dm2), Dm1)] = EA,ξiDm1 ,ξ∼D1 [f(A(Dm1 , Dm2), ξ)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(ξ, ξ2, .., ξi−1, ξi+1, ...ξm1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)] = EA,ξi∈Dm1 ,ξ∼D1 [f(A(D ′ m1 , Dm2), ξi)− f(A(Dm1 , Dm2), ξi)]
≤ EA,ξi∈Dm1 ,ξ∼D1 [Lf∥A(D ′ m1 , Dm2)−A(Dm1 , Dm2)∥ ≤ Lfβ.
To prove high probability bounds, we need the following lemma on the concentration behavior on the summation of weakly dependent random variables. Lemma 7 (Bousquet et al. 2020). Let Z = (Z1, . . . , Zn) be a vector of independent random variables with each taking values in Z , and g1, . . . , gn be some functions gi : Zn → R such that the following holds for any i ∈ [n] :
• |E[gi(Z) | Zi]| ≤ M a.s.,
• E[gi(Z) | Z_{[n]\{i}}] = 0 a.s.,
• gi has bounded differences β with respect to all variables except the i-th variable.
Then, for any p ≥ 2,
∥ Σ_{i=1}^{n} gi(Z) ∥_p ≤ 12√2 pnβ⌈log2 n⌉ + 4M√(pn),
where the Lp-norm of a random variable Z is denoted by ∥Z∥p := (E[|Z|^p])^{1/p}, p ≥ 1.
Next, we state the following well-known relationship between tail bounds and moment bounds.
Lemma 8 (Bousquet et al. 2020; Vershynin 2018). Let a, b ∈ R+ and let Z be a random variable with ∥Z∥p ≤ √p a + p b for all p ≥ 2. Then, for any δ ∈ (0, 1), with probability at least 1 − δ,
|Z| ≤ e ( a√(log(e/δ)) + b log(e/δ) ).
Proof of Part (c). In order to make use of Lemma 7 to obtain the generalization bounds, we will introduce:
hi = Eξ′i∼D1 [Eξi∼D1 [f(A(D i m1 , Dm2 ; ξ))]− f(A(D i m1 , Dm2 ; ξi)],
where Dim1 = {ξ1, ξ2, ..., ξi−1, ξ ′ i, ξi+1, ..., ξm1}, and ξ′i obeys identical distribution of ξi.
Hence, we have:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
= 1
m1 ∣∣∣ m1∑ i=1 (Eξ∼D1f(A(Dm1 , Dm2); ξ)− f(A(Dm1 , Dm2); ξi)) ∣∣∣
≤ 1 m1 ∣∣∣ m1∑ i=1 ( Eξ∼D1f(A(Dm1 , Dm2); ξ)− Eξ∼D1,ξ′i∼D1f(A(D i m1 , Dm2); ξ) ) ∣∣∣ +
∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2); ξi) ∣∣∣∣∣ + 1
m1 ∣∣∣∣∣ m1∑ i=1 ( Eξ′i∼D1f(A(D i m1 , Dm2); ξi)− f(A(Dm1 , Dm2); ξi) )∣∣∣∣∣ . It then follows from the definition of uniform stability that
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)|
≤2β + ∣∣∣∣∣ 1m1 m1∑ i=1 Eξ′i∼D1Eξ∼D1 [ f(A(Dim1 , Dm2); ξ) ] − f(A(Dim1 , Dm2 ; ξi)) ∣∣∣∣∣ =2β + 1
m1 ∣∣∣∣∣ m1∑ i=1 hi ∣∣∣∣∣ . Notice that all conditions of 7 hold. Thus, the following outcome can be derived for any p ≥ 2:∥∥∥∥∥ m1∑ i=1 hi(ξ) ∥∥∥∥∥ p ≤ 12 √ 2pm1β ⌈log2 m1⌉+ 4M √ pm1.
Combining Lemma 7 and Lemma 8 with hi defined above, we have the following inequality with probability 1− δ: ∣∣∣∣∣ m1∑ i=1 hi(ξ) ∣∣∣∣∣ ≤ e( 4M√m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The deviation bound now follows immediately:
|R(A(Dm1 , Dm2);D1)−Rs(A(Dm1 , Dm2);Dm1)| ≤ 2β + e ( 4M √ m1 √ log e δ + 12 √ 2β ⌈log2 m1⌉ √ log e δ ) .
The proof is completed.
C MAIN PROOF
C.1 APPROXIMATE EXPANSIVITY OF UPDATE RULES
With step size αx and αy , the update rules for single-timescale can be presented:
Gs ([ x y ]) := [ x− αx∇f(x,y) y − αy∇yg(x,y) ] .
Definition 5 (expansivity). An update rule is η-expansive if for every x, x′ ∈ Rd1 , y, y′ ∈ Rd2 : ∥G(x,y)−G (x′,y′)∥2 ≤ η √ ∥x− x′∥22 + ∥y − y′∥ 2 2.
Lemma 9. Suppose that Assumptions 1 and 2 hold for Problem (1). Then:
1. If f and g are non-convex functions, then Gs is (1+max{lfαx, lgαy})-expansive with step size αx, αy .
2. If f and g are convex functions, then Gs is ( √
2 + 2max{(lfαx)2, (lgαy)2})-expansive with step size αx, αy . 3. If f and g are strongly-convex with µf and µg respectively, then Gs is√ 2 (1− 2αx (µf + µg) + αx2l2)-expansive with step size:
(uf + µg)− √ (uf + µg) 2 − 0.5l2
l2 ≤ αx = αy
≤ min 1µf + µg , (uf + µg) + √ (uf + µg) 2 − 0.5l2 l2 . Proof. In Case 1 with the NC-NC objectives and the smoothness of objectives on Assumptions 1 and 2, we have∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y′))y − y′ + αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥+ ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y′))αy (∇yg(x,y)−∇yg (x′,y′)) ]∥∥∥∥
≤ (1 + max{lfαx, lgαy}) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ . In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y)) αy (∇yg(x,y)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y))αy (∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (4)
and∥∥∥∥Gs([ x′y ]) −Gs ([ x′ y′ ])∥∥∥∥2 = ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 − 2 [ x′ − x′y − y′ ]T [ αx (∇f(x′,y′)−∇f (x′,y)) αy (∇yg(x′,y′)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x′,y)−∇f (x′,y′))αy (∇yg(x′,y)−∇yg (x′,y′)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 + ∥y − y′∥2 . (5)
Combining the above equations 6, 7 and inequality ( ∑k
i=1 ak) 2 ≤ k ∑k i=1 a 2 k, we can derive the
expansive of update rule Gs under convexity condition:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ (2 + 2max{(lfαx)2, (lgαy)2})∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥
2 + ∥y∥2) will be convex. With the above conclusions, we can derive the following:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y)) (∇yg(x,y)−∇yg (x′,y))
] + αx 2 ∥∥∥∥[ (∇f(x,y)−∇f (x′,y))(∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥[ (∇f̃(x,y)−∇f̃ (x′,y))(∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ] ≤ ( 1− 2αx (µf + µg) + αx2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , lg}, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥[ ∇f(x,y)−∇f (x′,y)∇yg(x,y)−∇yg (x′,y) ]∥∥∥∥2
= ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]
≥ ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ 2 (1− 2αx (µf + µg) + αx2l2) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
C.2 SINGLE TIMESCALE
We first introduce the following lemma before providing the proof of the Theorem.
Lemma 10 (Hardt et al. (2016)). Consider two sequences of updates G_s^1, ..., G_s^K and (G_s^1)′, ..., (G_s^K)′ with initial points x0 = x′0, y0 = y′0. Define δk = √(∥xk − x′k∥^2 + ∥yk − y′k∥^2). Then we have:
δk+1 ≤ η δk, if G_s^k = (G_s^k)′ is η-expansive;
δk+1 ≤ min(η, 1) δk + 2σ, if G_s^k is η-expansive and sup ∥ [x; y] − G([x; y]) ∥ ≤ σ.
Proof. The first part of the inequality is obvious from the definition of expansivity and the assumption of Gks = (G k s) ′. For the second bound, note that:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) − [ xk yk ] + [ x′k y′k ] −G′s ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ xk − x′kyk − y′k ]∥∥∥∥
≤ δk + ∥∥∥∥Gs([ xkyk ]) − [ xk yk ]∥∥∥∥+ ∥∥∥∥G′s([ x′ky′k ]) − [ x′k y′k ]∥∥∥∥ ≤ δk + 2σ.
Also, δk+1 can be further expressed as:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ]) +Gs ([ x′k y′k ]) −G′s ([ x′k y′k
])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −G′s ([ x′k yk′
])∥∥∥∥ ≤ ηδk + 2σ.
Combining the above completes the proof of the Lemma 10.
Now, we are ready to prove Theorem 2:
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing only in one sample. Consider the updates G1s, ..., G K s and (G 1 s) ′, ..., (GKs ) ′. We can observe that the example chosen by the algorithm is the same in Dm1 , D ′ m1 at step k with probability 1−1/m1 and different with proba-
bility 1/m1. In the former case, we have identical update rules, while √ 1− 2αx (µf + µg) + α2xl2-
expansive can be employed in the latter through lemma 10. E [δk+1] ≤ ( 1− 1
m1
)( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 1 m1 E [δk] + 1 m1 2 √ (αxLf )2 + (αxLg)2
≤ ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 2 m1 √ (αxLf ) 2 + (αxLg) 2
≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
k∑ i=0 ( 2 ( 1− 2αx (µy + µg) + α2xl2 ))i/2 ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))i/2 (1) ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 1− 2αx (µf + µg) + α2xl2 + 0.5 )i = √ (αxLf )2 + (αxLg)2
m1 ( αx (µf + µg)− α 2 xl 2 2 + 0.25 )
=
√ L2f + L 2 g
m1 (µf + µg − (αxl)2/2 + 0.25) .
Here (1) comes from the mean equality √ ab ≤ (a + b)/2 for any a, b ≥ 0 and the assumption of (uf+µg)− √ (uf+µg) 2−0.5l2
l2 ≤ αx ≤ (uf+µg)+
√ (uf+µg)
2−0.5l2 l2 , which finishes the proof.
Proof of Part(b). The proof of Part(b) is analogous to the above, thus we use the same notations for this part.
E [δk+1] ≤ ( 1− 1
m1
)( 2 + 2max { l2fα 2 x, l 2 yα 2 y })1/2 E [δk] + 1 m1 E [δk] + 2 m1 √ L2fα 2 x + L 2 gα 2 y
= ( 2 + 2max { l2fα 2 x, l 2 gα 2 y })1/2 E [δk] + 2 √ L2fα 2 x + L 2 gα 2 y
m1 E [δk] ≤ 2 √ L2fα 2 x + L 2 gα 2 y
m1 ·
( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2 − 1√
2 + 2max { l2fα 2 x, l 2 gα 2 y } − 1
E [δk] ≤ O √ L2fα 2 x + L 2 gα 2 y ( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2
m1
.
To prove stability in the NC-NC case, we introduce the following lemma:
Lemma 11 (Hardt et al. (2016)). Assume that f(x,y; ξ) is Lf -Lipschitz continuous and 0 ≤ f(x,y; ξ) ≤ 1. Let Dm1 and D′m1 be two datasets differing in only one sample. Denote (xK ,yK) and (x′K ,y ′ K) as the output of K steps of SSGD (single-timescale algorithm) on Dm1
and D′m1 , respectively. Then, the following holds for every k ∈ {0, 1, ...,K}, where δk =√ ∥xk − x′k∥ 2 + ∥yk − y′k∥ 2:
E [|f (xk,yk; ξ)− f (x′k,y′k; ξ)|] ≤ k0 m1 + LfE [δk | δk0 = 0] .
Proof of Part(c). Applying Lemma 11, we get ready to prove the NC-NC case. Analogous to the previous case, we have:
E [δk+1] ≤ ( 1− 1
m1
)( 1 + cl
k
) E [δk] + 1
m
( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
k
= ( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
m1k .
The following can be derived:
E [δK | δk0 = 0] ≤ K∑
k=k0+1
T∏ t=k+1 ( 1 + cl t ) 2c√l2f + l2g m1k
≤ K∑
k=k0+1
T∏ t=k+1 { exp ( cl t )} 2c√l2f + l2g m1k
≤ K∑
k=k0+1
exp
( K∑
t=k+1
cl
t
) 2c √ l2f + l 2 g
m1k
≤ k∑
k=k0+1
exp(cl · log(K/k)) 2c √ l2f + l 2 g
m1k ≤ 2c √ l2f + l 2 g
m1
K∑ k=k0+1 k−cl−1
≤ 2 √ l2f + l 2 g
m1l
( K
k0
)cl .
Hence, Lemma 11 indicates:
E [|f(x, y)− f (x′, y′)|] ≤ k0 m1
+ 2Lf
√ l2f + l 2 g
m1l
( K
k0
)cl .
The right hand side is approximately minimized when k0 = ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1 .
Therefore, we have
β ≤ O ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1
m1cl for argument stability.
C.3 TWO-TIMESCALE SGD (TSGD)
C.3.1 STANDARD SETTINGS
With step size αx and αy , the update rule for two-timescale can be presented as:
GT ([ xk yk ]) := [ xk − αx∇f(xk,yTk ) yk − αy ∑T t=1∇yg(xk,ytk) ] .
Analogous to the single-timescale case, we first provide the expansivity of the update rules.
Lemma 12. Suppose that Assumptions 1 and 2 hold for Problem (1). Let αl = max{αxlf , 1+(αylg) 2
1−αylg } for simplicity sake and assume αylg ≤ 1. Then:
1. If f and g are non-convex functions, GT is (1 + αlT )-expansive.
2. If f and g are convex functions, GT is (1 + αl)-expansive with step size αx, αy .
3. If f and g are strongly-convex with µf and µg respectively, GT is 1 + αl-expansive with step size:
αx = αy ≤ 1
µf + µg .
Proof. In Case 1 with the NC-NC objectives by the triangle inequality, we have:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥+∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ The first item can be derived from:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y))y − y + αy∑Ty=1 (∇yg(x,yt)−∇yg (x′,yt)) ]∥∥∥∥
≤ (1 + αyT lg) ∥x− x′∥
The second item can be derived from:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥∥ [ x′ − x′ − αx (∇f(x′,y)−∇f (x′,y′)) y − y′ + ∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
≤ ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥+ ∥∥∥∥∥ [ αx (∇f(x′,y)−∇f (x′,y′))∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
From the Lipschitz continuous, we have:
T−1∑ t=0 αy ( ∇yg ( x,yt ) −∇yg ( x,yt )) ≤ T−1∑ t=0 αylg ∥∥yt − yt∥∥
Now we consider the t-th update: αylg ∥∥yt − yt∥∥ = αylg ∥∥yt−1 − αy∇yg (x′,yt−1)− yt−1 + αy∇yg (x′,yt−1)∥∥
≤ αylg ∥∥∥yt−1 − (yt−1)′∥∥∥+ (αylg)2 ∥∥∥yt−1 − (yt−1)′∥∥∥
· · · ≤ (αylg)t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′∥∥∥
According to the accumulation of the both side, we have: T−1∑ t=0 αylg ∥∥∥yt − (yt)′∥∥∥ ≤ αylg ∥∥∥y0 − (y0)′∥∥∥ ∥ T−1∑ t=1 [ (αylg) t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′]∥∥∥
=
[ 1− (αylg)T
1− αylg +
(αylg) 2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ = [ 1− (αylg)T + (αylg)2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ ≤ 1 + (αylg) 2
1− αylg ∥∥y − (y)′∥∥
Let αl = max{αylg, 1+(αylg) 2
1−αylg }, then:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + Tαl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ ∥∥∥∥∥ [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
≤ max (lfαx)2, ( 1 + (αylg) 2
1− αylg
)2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (6)
and the second decomposition can be obtained by the NC-NC case:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ( 1 + max{lfαx, 1 + (αylg) 2
1− αylg }
) ∥y − y′∥ . (7)
let αl = max{αxlf , 1+(αylg) 2 1−αylg }. Combining the above equations 6, 7 and inequality √ 1 + (αl)2 ≤ (1 + αl)2, then we can derive the expansive of update rule GT under convexity condition:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥ 2 + ∥y∥2) will be convex. Let αx = αy = α and denote αl = max{αxlf , 1+(αylg) 2
1−αylg }, we can derive the following with the conclusions from the convex case:∥∥∥∥GT ([ xy ]) −GT ([ x′ y
])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ αx 2 ∥∥∥∥∥ [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥∥ [ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ]∥∥∥∥∥ 2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≤ ( 1− 2α (µf + µg) + α2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , 1+(αylg) 2
(1−αylg)αy }, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥∥ [ ∇f(x,y)−∇f (x′,y)∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≥ ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
Proof. Because the main proof of Lemma 12 is similar to that of Lemma 9, we omit it.
Next, we give a bound for the update rule GT and prepare to prove Theorem 3. Since g() is a lg-smooth function, we have:
g ( x,yt+1 ) ≤ g ( x,yt ) + 〈 ∇g ( x,yt ) ,yt+1 − yt 〉 +
lg 2 ∥∥yt+1 − yt∥∥2 . ≤ g ( x,yt ) − 〈 ∇g ( x,yt ) , αy∇g ( x,yt )〉 +
lg 2 ∥∥αy∇g (x,yt)∥∥2 ≤ g ( x,yt ) − αy ( 1− αylg
2 )∥∥∇g (x,yt)∥∥2 . The two sides are accumulated from t = 1 to t = T and we could derive the following by Cauchy–Schwarz inequality:
T∑ t=1 ∥∥∇g (x,yt)∥∥2 ≤ g (x,y1)− g (x,yT ) αy (2− αylg) ⇒
( T∑
i=1
∇g(x,yt) )2 ≤ T
T∑ i=1 ∇g2(x,yt)
≤ T (g (x,y1)− g (x,yT )) αy(2− αylg) .
Hence, the bound of GT equals to √ L2fα 2 x + ( T (g(x,y1)−g(x,yT ))
αy(2−αylg)
)2 . Now, we are ready to give the
proof of Theorem 3.
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing in only one sample. Consider the updates G1T , ..., G | 1. What is the focus of the paper regarding bilevel optimization problems?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions or concerns about the theoretical analysis and experimental results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose to study the stability of estimators based on bilevel optimization problems. Using the notion of algorithmic stability, the authors manage to bound the generalization error. This analysis is carried out for the usual bilevel optimization algorithms (single-timescale and two-timescale SGD).
Strengths And Weaknesses
Major concerns:
Even though the authors provide a long list of points comparing their work against Bao et al. 2021, I am not sure I understood the difference:
(1) "Their study is only for the hyper-parameter": I do not see how the class of problems you consider is more general.
(4) Could you comment on why exactly their analysis needs a reinitialization of the inner parameters?
(5) I agree, but you do not seem to provide especially extensive experimental results. Could you comment on this?
Experimental part: Except for the sentence "The trend of generalization error in terms of K and T matches with our analysis in Theorem 4", there are no links between the experiments and the proposed analysis. Would it be possible to obtain more quantitative results than "a trend"? I am not asking for SOTA experiments, but it would be nice to have quantitative experiments validating the provided theorems. For instance, would it be possible to provide experiments in the setting of Theorem 4, to compute the value of β, and to check its variation as a function of T and K?
Clarity, Quality, Novelty And Reproducibility
The paper is rather clear and original.
ICLR | Title
On Stability and Generalization of Bilevel Optimization Problems
Abstract
(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with a wide range of applications such as meta-learning, hyper-parameter optimization, and reinforcement learning. Most of the existing studies on this problem only focused on analyzing the convergence or improving the convergence rate, while little effort has been devoted to understanding its generalization behaviors. In this paper, we conduct a thorough analysis on the generalization of first-order (gradient-based) methods for the bilevel optimization problem. We first establish a fundamental connection between algorithmic stability and generalization gap in different forms and give a high probability generalization bound which improves the previous best one from O( √ n) to O(log n), where n is the sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
N/A
√ n) to O(log n), where n is the
sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous update, while existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-stronglyconvex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how iterations can affect the generalization gap by experiments on meta-learning and hyper-parameter optimization.
1 INTRODUCTION
(Stochastic) bilevel optimization is a widely confronted problem in machine learning with various applications such as meta-learning (Finn et al., 2017; Bertinetto et al., 2018; Rajeswaran et al., 2019), hyper-parameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Baydin et al., 2017; Bergstra et al., 2011; Luketina et al., 2016), reinforcement learning (Hong et al., 2020), and few-shot learning (Koch et al., 2015; Santoro et al., 2016; Vinyals et al., 2016). The basic form of this problem can be defined as follows
min x∈Rd1
R(x) = F (x,y∗(x)) := Eξ [f (x,y∗(x); ξ)]
s.t. y∗(x) = arg min y∈Rd2
{G(x,y) := Eζ[g(x,y; ζ)]} , (1)
where f : Rd1 × Rd2 → R and g : Rd1 × Rd2 → R are two continuously differentiable loss functions with respect to x and y. Problem (1) has an optimization hierarchy of two levels, where the outer-level objective function f depends on the minimizer of the inner-level objective function g.
Due to its importance, the above bilevel optimization problem has received considerable attention in recent years. A natural way to solve problem (1) is to apply alternating stochastic gradient updates with approximating ∇yg(x,y) and ∇f(x,y), respectively. Briefly speaking, previous efforts mainly examined two types of methods to perceive an approximate solution that is close to the optimum y∗(x). One is to utilize the single-timescale strategy (Chen et al., 2021; Guo et al., 2021; Khanduri et al., 2021; Hu et al., 2022), where the updates for y and x are carried out simultaneously. The other one is to apply the two-timescale strategy (Ghadimi & Wang, 2018; Ji et al., 2021;
Hong et al., 2020; Pedregosa, 2016), where the update of y is repeated multiple times to achieve a more accurate approximation before conducting the update of x.
While there is a long list of work on bilevel optimization, most of the existing work only focuses on either analyzing its convergence behaviors (Ghadimi & Wang, 2018; Hong et al., 2020; Ji et al., 2021) or improving its convergence rate, based on the convexity and the smoothness properties of f(·, ·) and/or g(·, ·) (Liu et al., 2020; Li et al., 2020). Contrarily, only little effort is devoted to understanding the generalization behavior of the problem. To the best of our knowledge, there is only one recent work on the generalization analysis for bilevel problems (Bao et al., 2021), which presents the first expected uniform stability bound. However, there are still several undesirable issues in this work: (1) Their result is only for the uniform stability (which could be deduced from argument stability with certain conditions, see Definition 4 for details), leaving the analysis of other stronger definitions of algorithmic stability open; (2) Additionally, the UD algorithm allows the outer level parameters to be updated continuously but needs to reinitialize the inner level parameters before each iteration in the inner loop, which is not commonly used in practice due to their inefficiency (see line 4 in Algorithm 3). (3) The proof of Theorem 2 in their work is unclear to show whether the update of outer level parameters is argument dependent on the inner level parameters, where may exist some gap in the analysis of UD algorithm (see Appendix E for detailed discussions). (4)Their experiments take only hyper-parameter optimization into consideration and neglect other applications in the bilevel optimization instances.
To address all the aforementioned issues, we give in this paper a thorough analysis on the generalization behaviors of first-order (gradient-based) methods for general bilevel optimization problem. We employ the recent advances of algorithmic stability to investigate the generalization behaviors in different settings. Specifically, our main contributions can be summarized as follows:
• Firstly, we establish a fundamental connection between generalization gap and different notations of algorithmic stability (argument stability and uniform stability) for any randomized bilevel optimization algorithms in both expectation and high probability forms. Specifically, we show that the high probability form of the generalization gap bound can be improved from O( √ n) to O(log n) compared with the result in Bao et al. (2021).
• Next, we present the stability bounds for gradient-based methods with either singletimescale or two-timescale update strategy under different standard settings. To the best of our knowledge, this work provides the first stability bounds for the two-timescale (double loop) algorithms, which allows the accumulation of the sub-sampled gradients in the inner level. In detail, we consider the settings of strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC), and further extend our analysis to a particular nonconvex-strongly-convex (NC-SC) setting that is widely appeared in practice. Table 1 is the summary of our main results.
• Thirdly, we provide the first generalization bounds for the case where both the outer and inner level parameters are subject to continuous (iterative) changes. Compared to the previous work (Bao et al., 2021), our work does not need the reinitialization step before each iteration in the inner level and hence our algorithm can carry over the last updated inner level parameters, which is more general and practical.
• Finally, we conduct empirical studies to corroborate our theories via meta-learning and hyperparameter optimization, which are two applications of bilevel optimization.
Due to space limitations, all the proofs and additional experiments are included in Appendix.
1.1 RELATED WORK
Research at the interface between generalization and the bilevel problem can be roughly classified into two categories. The first one includes all the research on bilevel optimization. In recent decades, extensive studies have been done on this topic, which suggests that bilevel optimization has a wide range of applications in machine learning such as hyper-parameter optimization (Franceschi et al., 2018; Lorraine & Duvenaud, 2018; Okuno et al., 2021), meta learning (Bertinetto et al., 2018; Rajeswaran et al., 2019; Soh et al., 2020) and reinforcement learning (Yang et al., 2018; Tschiatschek et al., 2019). Most of the existing work studies the problem from an optimization perspective. For example, Ghadimi & Wang (2018); Ji et al. (2021) provide the convergence rate analysis based on the nonconvex-strongly-convex assumption for the two functions f(·, ·) and g(·, ·). (Grazzi et al., 2020) considers the iteration complexity for hypergradient computation. (Liu et al., 2020; Li et al., 2020) present an asymptotic analysis for the convex-strongly-convex setting. Perhaps the most related one to ours from the generalization standpoint (i.e., the expectation of population risk and empirical risk) is Bao et al. (2021), while there may exist some gap in the analysis of UD algorithm. In this work, we employ a novel approach to examine the stability bounds of bilevel optimization problems. Firstly, our work analyzes the generalization behavior by observing how different settings can have an impact on the stability bounds directly. Secondly, our work adopts a stronger version of stability called argument stability, which can imply the previously used uniform stability if the function is sufficiently smooth. Furthermore, our work does not need to reinitialize the inner-level parameters and allows them to carry over their last updated parameters at each time updating the inner level. This indicates that y in the inner level is updated iteratively and depends on the current parameter of x, which is more common and efficient in practice.
The second category includes all the work on stability analysis. There is a long list of research on stability and generalization (Bousquet & Elisseeff, 2002; Mukherjee et al., 2006; Shalev-Shwartz et al., 2010). Bousquet & Elisseeff (2002) first introduces the notion of uniform stability and establishes the first framework of stability analysis. Hardt et al. (2016) later extends the stability analysis to iterative algorithms based on stochastic gradient methods for the vanilla stochastic optimization. After that, there are subsequent studies on generalization analysis for various problems via algorithmic stability, such as minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021; Zhang et al., 2021) and pairwise learning (Yang et al., 2021; Lei et al., 2020; Xue et al., 2021; Huai et al., 2020). However, it is notable that due to the additional stochastic function in the constraint in the bilevel optimization, all the previous techniques and results cannot be applied to our problem. Although the generalization analysis of minmax optimization is somewhat similar to ours, it involves only one objective function f and a single level in algorithms for typical minmax optimization problems, while in the bilevel optimization algorithms there is an inner level and an outer level, which is considerably more challenging.
2 PRELIMINARIES
2.1 DEFINITIONS AND ASSUMPTIONS
In the following, we give some necessary definitions and assumptions that are widely used in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2021; Khanduri et al., 2021) and generalization analysis (Hardt et al., 2016; Lei et al., 2021). Definition 1 (Joint Lipschitz Continuity). A function f(x,y) is jointly L-Lipschitz over Rd1 × Rd2 , if for all x ∈ Rd1 ,y ∈ Rd2 , the following holds, |f(x,y) − f(x′,y′)| ≤ L √ ∥x− x′∥22 + ∥y − y′∥ 2 2.
Definition 2 (Smoothness). A function $f$ is $l$-smooth over a set $S$ if for all $u,w\in S$, $\|\nabla f(u)-\nabla f(w)\|\le l\|u-w\|$. Definition 3 (Strong Convexity). A function $f$ is $\mu$-strongly-convex over a set $S$ if for all $u,w\in S$, $f(u)+\langle\nabla f(u),w-u\rangle+\frac{\mu}{2}\|w-u\|^2\le f(w)$. Assumption 1 (Inner-level Function Assumption). We assume the inner stochastic function $g(x,y)$ in (1) satisfies the following: (i) $g(x,y)$ is jointly $L_g$-Lipschitz for any $x\in\mathbb{R}^{d_1}$ and $y\in\mathbb{R}^{d_2}$. (ii) $g(x,y)$ is continuously differentiable and $l_g$-smooth for any $(x,y)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$.
Assumption 2 (Outer-level Function Assumption). We assume the outer stochastic function $f(x,y)$ in (1) satisfies the following: (iii) $f(x,y)$ is jointly $L_f$-Lipschitz for any $x\in\mathbb{R}^{d_1}$ and $y\in\mathbb{R}^{d_2}$. (iv) $f(x,y)$ is continuously differentiable and $l_f$-smooth for any $(x,y)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$.
2.2 PROBLEM FORMULATION
Given two distributions $\mathcal{D}_1$ and $\mathcal{D}_2$, in the (stochastic) optimization problem we aim to find the minimizer of Problem (1). However, since the distributions are often unknown, in practice we only have two finite-size datasets $D_{m_1}=\{\xi_i\mid i=1,\dots,m_1\}\sim\mathcal{D}_1^{m_1}$ and $D_{m_2}=\{\zeta_i\mid i=1,\dots,m_2\}\sim\mathcal{D}_2^{m_2}$, where the $\xi_i$ and $\zeta_i$ are i.i.d. samples from $\mathcal{D}_1$ and $\mathcal{D}_2$, respectively. Based on these datasets, we will design some (randomized) algorithm $A$ with output $A(D_{m_1},D_{m_2})=(x,y)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$. Our goal is to investigate the generalization behavior of such output. Note that although there are two stochastic functions in the bilevel optimization problem, we only care about the generalization of the outer-level one, since it is the one that we ultimately aim to minimize.
Below we define the generalization gap to measure the generalization behavior. Given the distribution $\mathcal{D}_1$ and a finite dataset $D_{m_1}\sim\mathcal{D}_1^{m_1}$, the population risk function $R(x,y,\mathcal{D}_1)$ of $x,y$ on $\mathcal{D}_1$ is defined as $R(x,y,\mathcal{D}_1):=\mathbb{E}_{\xi\sim\mathcal{D}_1}[f(x,y(x);\xi)]$, and its empirical risk function on $D_{m_1}$ is $R_s(x,y,D_{m_1})=\frac{1}{m_1}\sum_{i=1}^{m_1}f(x,y(x);\xi_i)$. Moreover, for a fixed hyperparameter $x\in\mathbb{R}^{d_1}$ and $y(x)\in\mathbb{R}^{d_2}$ (note that $y(x)$ might depend on $x$), we define the difference between the population risk and the empirical risk over $(x,y(x))$ as the bilevel generalization gap of $(x,y(x))$: $\mathbb{E}_s[R(x,y)-R_s(x,y)]$, where $\mathbb{E}_s$ denotes the expectation over $D_{m_1}\sim\mathcal{D}_1^{m_1}$. When there is no ambiguity, we hereafter simplify the notation as follows: $R(x,y,\mathcal{D}_1)=R(x,y)$ and $R_s(x,y,D_{m_1})=R_s(x,y)$. Our goal is thus to analyze the bilevel generalization gap of the output of algorithm $A(D_{m_1},D_{m_2})$ based on $D_{m_1}$ and $D_{m_2}$. Since the generalization error depends on the algorithm itself, in the following we introduce the algorithms to be considered in this paper.
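As a concrete illustration, the following sketch (ours, not part of the paper) shows how this bilevel generalization gap could be estimated numerically, assuming a per-sample outer loss loss_fn and fresh samples from $\mathcal{D}_1$ as a Monte Carlo stand-in for the population risk; all names here are hypothetical.

import numpy as np

def bilevel_generalization_gap(loss_fn, x, y, train_outer, heldout_outer):
    """Estimate R(x, y) - R_s(x, y) for the outer-level objective.

    loss_fn(x, y, xi) : evaluates f(x, y(x); xi) on one outer-level sample xi
    train_outer       : the m1 samples D_{m1} used by the algorithm
    heldout_outer     : fresh samples from D_1, approximating the population risk
    """
    empirical_risk = np.mean([loss_fn(x, y, xi) for xi in train_outer])
    population_risk = np.mean([loss_fn(x, y, xi) for xi in heldout_outer])
    return population_risk - empirical_risk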
Most of the existing algorithms adopt the following idea: first approximate $y^*$ on $D_{m_2}$ for a given parameter $x$ in the inner level, and then seek the hyperparameter $x^*(D_{m_1},D_{m_2})$ with corresponding hypothesis $y^*(x^*(D_{m_1},D_{m_2}),D_{m_2})$ via the estimation below:
$\hat{x}(D_{m_1},D_{m_2}) \approx \arg\min_x R_s(x,\hat{y}(x,D_{m_2}),D_{m_1})$, where $\hat{y}(x,D_{m_2}) \approx \arg\min_y G_s(x,y,D_{m_2})$,   (2)
where $G_s(x,y,D_{m_2})$ is the empirical risk of $G(x,y)$ over $D_{m_2}$, i.e., $G_s(x,y,D_{m_2})=\frac{1}{m_2}\sum_{i=1}^{m_2}g(x,y(x);\zeta_i)$. Most of the current gradient-based (first-order) algorithms for approximating (2) can be categorized into two classes: single-timescale methods and two-timescale methods. The single-timescale method performs the updates for $y$ and $x$ simultaneously via stochastic gradient descent (SGD), while the two-timescale method updates $y$ multiple times before updating $x$ (via stochastic gradient descent). As there are numerous approaches in both classes (see the Related Work section for details), in this paper we will analyze the generalization behaviors of the most classical and standard one in each class, i.e., single-timescale SGD (SSGD; Algorithm 1) and two-timescale SGD (TSGD; Algorithm 2). There is a long list of work (Chen et al., 2021; Ghadimi & Wang, 2018; Ji et al., 2021) based on either SSGD or TSGD.
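To make the distinction concrete, here is a minimal sketch (ours, with hypothetical callables grad_f and grad_g_y standing for stochastic gradients of f and of g with respect to y); as in Algorithms 1-2 below, the outer step uses the direct stochastic gradient of f rather than a full hypergradient.

import numpy as np

def ssgd(grad_f, grad_g_y, D1, D2, x0, y0, K, ax, ay, rng):
    """Single-timescale SGD: x and y are updated simultaneously from (x_k, y_k)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(K):
        zeta = D2[rng.integers(len(D2))]   # inner-level sample
        xi = D1[rng.integers(len(D1))]     # outer-level sample
        gy, gx = grad_g_y(x, y, zeta), grad_f(x, y, xi)
        y, x = y - ay * gy, x - ax * gx
    return x, y

def tsgd(grad_f, grad_g_y, D1, D2, x0, y0, K, T, ax, ay, rng):
    """Two-timescale SGD: T inner steps per outer step; y is warm-started
    from its last value rather than reinitialized."""
    x, y = x0.copy(), y0.copy()
    for _ in range(K):
        for _ in range(T):
            zeta = D2[rng.integers(len(D2))]
            y = y - ay * grad_g_y(x, y, zeta)
        xi = D1[rng.integers(len(D1))]
        x = x - ax * grad_f(x, y, xi)
    return x, y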
3 GENERALIZATION AND STABILITY FOR BILEVEL OPTIMIZATION
Algorithmic stability is one of the classical approaches to analyzing the generalization bound of an algorithm. Roughly speaking, the algorithmic stability of a (randomized) algorithm A measures how much the output of A changes if we change one data sample in the input dataset. While there are various notions of stability, most of the existing work on analyzing the stability of stochastic optimization, pairwise learning, and minimax optimization focuses on uniform stability (Bousquet & Elisseeff, 2002) and argument stability (Liu et al., 2017; Lei & Ying, 2020). Thus, we also adopt these two notions of stability for the bilevel optimization problem. Briefly speaking, uniform stability focuses on the resulting change in the population risk function, while argument stability considers the resulting change in the arguments, i.e., the output of the algorithm. Definition 4 (Algorithmic Stability). Let $A:\mathcal{D}_1^{m_1}\times\mathcal{D}_2^{m_2}\to\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$ be a randomized algorithm.
Algorithm 1 Single-timescale SGD (SSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets D_{m1} and D_{m2}
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   Uniformly sample i ∈ [m2], j ∈ [m1]
5:   y_{k+1} = y_k − αy ∇_y g(x_k, y_k(x_k); ζ_i)
6:   x_{k+1} = x_k − αx ∇f(x_k, y_k(x_k); ξ_j)
7: end for
8: return x_K and y_K

Algorithm 2 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     Uniformly sample i ∈ [m2]
7:     y_k^{t+1} = y_k^t − αy ∇_y g(x_k, y_k^t(x_k); ζ_i)
8:   end for
9:   Uniformly sample j ∈ [m1]
10:  x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); ξ_j)
11: end for
12: return x_K, y_K^T
(a) A is β-uniformly-stable if for all datasets $D_{m_1},D'_{m_1}\sim\mathcal{D}_1^{m_1}$ and $D_{m_2}\sim\mathcal{D}_2^{m_2}$ such that $D_{m_1}$ and $D'_{m_1}$ differ in at most one sample, we have the following for any $\xi\sim\mathcal{D}_1$:
$\mathbb{E}_A[|f(A(D_{m_1},D_{m_2}),\xi)-f(A(D'_{m_1},D_{m_2}),\xi)|]\le\beta$.
A is β-uniformly-stable with probability at least $1-\delta$ if we have the following for any $\xi\sim\mathcal{D}_1$ with probability at least $1-\delta$: $|f(A(D_{m_1},D_{m_2}),\xi)-f(A(D'_{m_1},D_{m_2}),\xi)|\le\beta$.
(b) A is β-argument-stable in expectation if for all datasets $D_{m_1},D'_{m_1}\sim\mathcal{D}_1^{m_1}$ and $D_{m_2}\sim\mathcal{D}_2^{m_2}$ such that $D_{m_1}$ and $D'_{m_1}$ differ in at most one sample, we have:
$\mathbb{E}_A[\|A(D_{m_1},D_{m_2})-A(D'_{m_1},D_{m_2})\|_2]\le\beta$.
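For intuition, argument stability can also be probed empirically. A rough sketch (ours; algorithm is any map from the two datasets to an output (x, y), and every name and signature here is a hypothetical assumption) reruns the algorithm, with the same internal randomness, on a neighbouring outer-level dataset and measures how far the output moves.

import numpy as np

def argument_stability_estimate(algorithm, D1, D2, fresh_samples, seed, n_trials=10):
    """Monte Carlo proxy for the argument stability of Definition 4(b)."""
    gaps = []
    for trial in range(n_trials):
        i = trial % len(D1)
        D1_prime = list(D1)
        D1_prime[i] = fresh_samples[trial]        # replace one sample by a fresh draw from D_1
        x, y = algorithm(D1, D2, seed=seed)       # same seed => same sample path
        x_p, y_p = algorithm(D1_prime, D2, seed=seed)
        gaps.append(np.sqrt(np.sum((x - x_p) ** 2) + np.sum((y - y_p) ** 2)))
    return float(np.mean(gaps))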
Note that the definition of uniform stability in expectation is the same as the definition in (Bao et al., 2021). Thus, our other definitions can be considered as extensions of the previous stability notions to bilevel optimization. In the following, we present Theorem 1 as our first result, which shows a crucial relationship between the generalization gap and algorithmic stability for an algorithm A. Theorem 1. Let $A:\xi^{m_1}\times\zeta^{m_2}\to\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$ be a randomized BO algorithm.
(a) If A is β-uniformly-stable in expectation, then the following holds for $D_{m_1}\sim\mathcal{D}_1^{m_1}$, $D_{m_2}\sim\mathcal{D}_2^{m_2}$: $\mathbb{E}_{A,D_{m_1}}[R(A(D_{m_1},D_{m_2}))-R_s(A(D_{m_1},D_{m_2}))]\le\beta$.
(b) If A is β-argument-stable in expectation and Assumption 2 holds, then the following holds for $D_{m_1}\sim\mathcal{D}_1^{m_1}$, $D_{m_2}\sim\mathcal{D}_2^{m_2}$: $\mathbb{E}_{A,D_{m_1}}[R(A(D_{m_1},D_{m_2}))-R_s(A(D_{m_1},D_{m_2}))]\le L_f\beta$.
(c) Assume that $|f(x,y;\xi)|\le M$ for some $M\ge 0$. If A is β-uniformly-stable almost surely, then for $D_{m_1}\sim\mathcal{D}_1^{m_1}$, $D_{m_2}\sim\mathcal{D}_2^{m_2}$, the following holds with probability $1-\delta$:
$|R(A(D_{m_1},D_{m_2}))-R_s(A(D_{m_1},D_{m_2}))|\le 2\beta+e\Big(\frac{4M}{\sqrt{m_1}}\sqrt{\log\frac{e}{\delta}}+12\sqrt{2}\,\beta\lceil\log_2 m_1\rceil\sqrt{\log\frac{e}{\delta}}\Big)$,
where $e$ is the base of the natural logarithm.
Remark 1. The above theorem suggests that the generalization gap can be controlled by several notions of algorithmic stability. Parts (a) and (b) show that the expected generalization gap can be bounded by uniform stability and by argument stability together with the Lipschitz constant, respectively; Part (c) indicates that the generalization gap of the algorithm is no more than $O(\beta\log(m_1)+1/\sqrt{m_1})$ with probability $1-\delta$. Compared with the existing work (Bao et al., 2021), Theorem 1 additionally considers argument stability, which is a stronger notion than uniform stability (since uniform stability can be deduced from argument stability when the function is sufficiently smooth). Moreover, we use McDiarmid's inequality and the equivalence of tails and moments for random variables with a mixture of sub-gaussian and sub-exponential tails (Lemma 1 in Bousquet et al. (2020)), which provides a significantly improved high-probability bound in Part (c) (i.e., improving from $O(\beta\sqrt{m_1})$ in Bao et al. (2021) to $O(\beta\log m_1)$).
4 STABILITY ANALYSIS FOR BILEVEL OPTIMIZATION ALGORITHMS
Motivated by Theorem 1, we can see that to analyze the generalization behavior of any algorithm, it suffices to analyze its stability. As mentioned in Section 2.2, we will consider the stability of SSGD and TSGD. For simplicity, we let SC-SC denote the case where f and g are both strongly convex. C-C, NC-NC, and NC-SC are defined analogously, with "C" standing for a convex function and "NC" for a nonconvex function.
4.1 STABILITY BOUNDS FOR SINGLE-TIMESCALE SGD
As we can see from Algorithm 1, SSGD updates y and x simultaneously. In the following we develop stability bounds for this algorithm in different settings. Theorem 2. Suppose that Assumptions 1 and 2 hold and algorithm A is SSGD with K iterations:
(a) Assume that Problem (1) is SC-SC with strong convexity parameters $\mu_f$ and $\mu_g$. Let $\alpha_x=\alpha_y$ (see Lemma 9 for details) be the step sizes and denote $l=\max\{l_f,l_g\}$. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big(\big(L_f^2+L_g^2\big)^{\frac{1}{2}}\big(m_1(\mu_f+\mu_g-(\alpha_x l)^2/2+0.25)\big)^{-1}\Big)$.
(b) Assume that Problem (1) is C-C. Let $\alpha_x,\alpha_y$ be the step sizes. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big(m_1^{-1}\sqrt{(\alpha_x L_f)^2+(\alpha_y L_g)^2}\,\big(2+2\max\{(\alpha_x l_f)^2,(\alpha_y l_g)^2\}\big)^{K/2}\Big)$.
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy $\max\{\alpha_x,\alpha_y\}\le c/k$ for some constant $c\ge 0$ and let $l=\max\{l_f,l_g\}$. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big((m_1cl)^{-1}\big(2cL_f\sqrt{l_f^2+l_g^2}\big)^{\frac{1}{cl+1}}\cdot K^{\frac{cl}{cl+1}}\Big)$,
where $l_f,l_g$ and $L_f,L_g$ are the smoothness constants and Lipschitz constants of f and g, respectively. Remark 2. Note that the above stability bounds are independent of the specific form of the objective function f(·, ·) and of the exact form of the sample distribution $\mathcal{D}_1$; they rely instead on the properties of the loss functions and the sample size $m_1$, and the bounds in the C-C and NC-NC cases additionally depend on the number of iterations. Specifically, Part (a) establishes a stability bound of $O(1/m_1)$ in the SC-SC setting, and Part (b) considers the C-C case with a stability bound $O(\kappa_1^{K/2}/m_1)$ related to the number of iterations and the data size, where $\kappa_1$ is a constant. The NC-NC case is discussed in Part (c), which provides a stability bound of $O(K^{\frac{cl}{cl+1}}/m_1)$, where c is a constant controlling the step size and l is the larger of the smoothness constants $l_f$ and $l_g$. The conclusions here match the existing results for minmax problems (Lei et al., 2021; Farnia & Ozdaglar, 2021).
4.2 STABILITY BOUNDS FOR TWO-TIMESCALE SGD
Compared with the above SSGD, two-timescale SGD (TSGD; Algorithm 2) generally achieves more accurate approximate solutions by updating y multiple times before updating x. In this section, we extend our analysis from SSGD to TSGD. In particular, compared with the results in Bao et al. (2021), we provide stability bounds in Theorem 3 for the case where the inner-level parameter (y) is updated iteratively (i.e., with consistency). We further explore in Theorem 4 a particular NC-SC setting, which commonly appears in bilevel optimization applications such as meta learning and hyperparameter optimization. Theorem 3. Suppose that Assumptions 1 and 2 hold and $|g(\cdot,\cdot)|\le 1$. Let A be the TSGD algorithm with K outer iterations and T inner iterations. Then we have
(a) Assume that Problem (1) is SC-SC. Let $l=\max\{l_f,\frac{1+(\alpha_y l_g)^2}{(1-\alpha_y l_g)\alpha_y}\}$ and $\alpha=\alpha_x=\alpha_y\le\min\{1/l_g,1/(\mu_f+\mu_g)\}$ be the step sizes. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big(m_1^{-1}\sqrt{L_f^2\alpha_x^2+\big(\tfrac{2T}{\alpha_y(2-\alpha_y l_g)}\big)^2}\,(1+\alpha l)^K\Big)$.
(b) Assume that Problem (1) is C-C. Let $\alpha l=\max\{\alpha_x l_f,\frac{1+(\alpha_y l_g)^2}{1-\alpha_y l_g}\}$ and $\alpha_x,\alpha_y\le\frac{1}{l_g}$ be the step sizes. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big(m_1^{-1}\sqrt{L_f^2\alpha_x^2+\big(\tfrac{2T}{\alpha_y(2-\alpha_y l_g)}\big)^2}\,(1+\alpha l)^K\Big)$.
(c) Assume that Problem (1) is NC-NC. Let the step sizes satisfy $\max\{\alpha_x,\alpha_y\}\le c/k$ for some constant $c\ge 0$ and let $l=\max\{l_f,l_g\}$. Then, A is β-argument-stable in expectation, where
$\beta\le O\Big((m_1Tcl)^{-1}\big(2cL_f\sqrt{l_f^2+T^2l_g^2}\big)^{\frac{1}{Tcl+1}}\cdot K^{\frac{Tcl}{Tcl+1}}\Big)$.
Remark 3. Compared with the previous results for SSGD, the stability bounds of TSGD depend on the number of outer-level iterations, the number of inner-level iterations, and the outer-level data size. If the step sizes are sufficiently small, the bounds in Theorem 3 are asymptotically the same as the bounds for SSGD in Theorem 2; thus, Theorem 3 can be considered a generalization of the previous one. The dependence on T also reveals our novelty compared with existing stability analyses for other problems, such as plain SGD and minmax problems. To the best of our knowledge, this work provides the first stability bounds for two-timescale (double-loop) algorithms, which allow the accumulation of sub-sampled gradients in the inner level. Remark 4. Comparing our results with the ones in (Bao et al., 2021), we have the following observations. 1) They only established a uniform stability bound for the Unrolled Differentiation algorithm (Algorithm 3), where the inner-level parameters are reinitialized each time the algorithm enters the inner-level loop; this means their analysis tracks the change of only one parameter in the outer-level loop, while our analysis considers the updates of both parameters. 2) Their proof needs to assume that the update of y in the inner level after reinitialization is not affected by the value specified for x. However, this assumption is quite uncommon, and it is probably the reason that they do not need to make any assumption on the inner-level objective function (see Appendix E for details). In contrast, our work allows the inner-level parameters to be updated consistently (i.e., carrying over the values from the last update) instead of being reinitialized each time the inner-level loop is entered. Specifically, we allow $y_k^T$ to be employed at the beginning of the $(k+1)$-th outer-level iteration, rather than $y^0$. This enables us to obtain different stability bounds for different inner-level objective functions from a novel perspective.
In the following, we extend our analysis to a particular NC-SC setting that is frequently encountered in real-world applications and optimization analysis. Theorem 4. Suppose that Assumptions 1 and 2 hold, $0\le f(\cdot,\cdot)\le 1$, and Problem (1) is NC-SC. Let A be the TSGD algorithm with K outer iterations and T inner iterations, with $\max\{\alpha_x,\alpha_y\}\le c/k$ for a constant $c\ge 0$. Denote $l=\max\{l_f,l_g\}$. Then, A is β-uniform-stable in expectation, where
$\beta\le O\Bigg(\frac{\big(2cL_f\sqrt{l_f^2+l_g^2T^2}\big)^{\frac{1}{c(Tl+l-\mu_g)+1}}\cdot K^{\frac{c(Tl+l-\mu_g)}{c(Tl+l-\mu_g)+1}}\,(Tl+l-\mu_g+2/c)}{m_1(Tl+l-\mu_g)}\Bigg)$.
Remark 5. Compared with our previous analysis, we now sketch the technical differences. We consider the bound on the term $(\delta_{x,k},\delta_{y,k})^T=(\|x_k-x'_k\|,\|y_k-y'_k\|)^T$, whereas the previous analysis employed $\delta_k=\sqrt{\|x_k-x'_k\|_2^2+\|y_k-y'_k\|_2^2}$, where $(x_k,y_k)$ and $(x'_k,y'_k)$ are the outputs of TSGD after k iterations on $D_{m_1}$ and $D'_{m_1}$ respectively, with $D_{m_1}$ and $D'_{m_1}$ differing in one sample. In the NC-SC setting, we show that $(\delta_{x,k+1},\delta_{y,k+1})^T\le((1+\alpha_x l)\delta_{x,k},(1+\alpha_x Tl)\delta_{y,k})^T$ (where $\le$ denotes the entry-wise inequality), which means this term can be controlled. We then take its expectation to derive our uniform stability bound. To obtain the generalization gap over continuously changing parameters, it is imperative to take into account the growth of $(\delta_{x,k},\delta_{y,k})$ instead of only $\delta_{x,k}$ as in (Bao et al., 2021). Appendix C.3 provides more details.
Thus, based on our previous results, we now provide the first generalization bounds in the NC-NC setting for both SSGD and TSGD. Corollary 5. Assume that the problem is NC-NC, $|f(\cdot,\cdot;\xi)|\le 1$ for all $\xi$, and Assumptions 1 and 2 hold. Denote $l=\max\{l_f,l_g\}$ with $\max\{\alpha_x,\alpha_y\}\le c/k$ for a constant $c\ge 0$. Then, the generalization gap of SSGD (Algorithm 1) with K iterations is bounded by $O(K^{\frac{cl}{cl+1}}/m_1)$.
Corollary 6. Assume that the problem is NC-NC, $|f(\cdot,\cdot;\xi)|\le 1$ for all $\xi$, and Assumptions 1 and 2 hold. Let $l=\max\{l_f,l_g\}$ with $\max\{\alpha_x,\alpha_y\}\le c/k$. Then, the generalization gap of TSGD (Algorithm 2) with K outer iterations and T inner iterations is bounded by $O(T^{\frac{1}{Tcl+1}}K^{1-\frac{1}{Tcl+1}}/m_1)$.
Remark 6. By Theorems 1, 2, and 3, we can derive the above corollaries on the generalization gap from the stability bounds. Corollaries 5 and 6 show that extremely large numbers of iterations (K for SSGD and K, T for TSGD) will drastically reduce the stability of these algorithms and increase the generalization gap, thereby increasing the risk of overfitting. We will also verify this in the following experiments.
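To see the monotone growth that Corollaries 5 and 6 predict, the following small sketch (ours) evaluates the two rates with constants dropped; the values of c and l are purely illustrative assumptions, so only the trend in K and T is meaningful.

# Rates from Corollaries 5 and 6 with constants dropped (illustrative only).
def ssgd_rate(K, m1, c=0.1, l=1.0):
    return K ** (c * l / (c * l + 1)) / m1

def tsgd_rate(K, T, m1, c=0.1, l=1.0):
    e = 1.0 / (T * c * l + 1)
    return T ** e * K ** (1 - e) / m1

m1 = 2000
for K in (10, 100, 1000, 10000):
    print(K, round(ssgd_rate(K, m1), 5), round(tsgd_rate(K, T=5, m1=m1), 5))
# Both rates grow monotonically with K (and with T for TSGD), matching Remark 6.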
5 EXPERIMENTS
In this section, we empirically validate our previous theoretical results on real-world datasets. Two experiments, meta-learning and hyperparameter optimization, are conducted via Algorithm 2, TSGD (note that when T = 1, TSGD reduces to SSGD). Due to space limitations, we present only the meta-learning experiment here, leaving the hyperparameter optimization experiment and other details to Appendix D.
5.1 META LEARNING
Consider the few-shot meta-learning problem with M tasks $\{\mathcal{T}_i, i=1,\dots,M\}$ sampled from a distribution $P_\mathcal{T}$. We aim to learn a model that can rapidly adapt to different tasks. Firstly, the embedding model $\phi$ is shared by all tasks to learn embedded features. Secondly, the task-specific parameter $w_i$ adapts the shared embedding to its own sub-problem. Thus, the overall problem of meta-learning can be formulated as follows:
$\min_\phi \; \mathcal{L}_D(\phi,\bar w^*) = \mathbb{E}_{\xi\in D_i^{te},\,\mathcal{T}_i}[\mathcal{L}(\phi,w_i^*;\xi)]$,   (3a)
s.t. $\bar w^* = \arg\min_{\bar w}\big[\mathcal{L}_{D^{tr}}(\phi,\bar w)=\mathbb{E}_{\mathcal{T}_i}[\mathcal{L}_{D_i^{tr}}(\phi,w_i)]\big]$,   (3b)
where $D_i^{tr}$ and $D_i^{te}$ are the training and testing datasets for task $\mathcal{T}_i$. Each $w_i$ is computed from one or more gradient descent updates from $\bar w$ on the corresponding task (rapid adaptation), i.e., $w_i=\bar w-\alpha\nabla_{\bar w}\mathcal{L}_{D^{tr}}(\phi,w_i)$. In the inner level, the base learner optimizes the series of $w_i$ for each task (Equation 3b). In the outer level, the meta-learner optimizes the embedding model $\phi$ using the minimizers $w_i^*$ learned from the inner level and computes the loss on the testing dataset (Equation 3a).
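The bilevel structure of (3a)-(3b) can be written compactly as the following PyTorch-style sketch (ours, first-order: the query loss is not backpropagated through the inner trajectory). The helper names task_loss and inner_adapt, and the data layout, are assumptions rather than the authors' code.

import torch

def inner_adapt(phi, w_bar, support, task_loss, alpha_w, steps=1):
    """Rapid adaptation (Eq. 3b): one or more gradient steps on the task head from w_bar."""
    w = w_bar.detach().clone().requires_grad_(True)
    for _ in range(steps):
        (grad_w,) = torch.autograd.grad(task_loss(phi, w, support), w)
        w = (w - alpha_w * grad_w).detach().requires_grad_(True)
    return w

def outer_step(phi, w_bar, task_batch, task_loss, alpha_w, opt_phi):
    """Meta update (Eq. 3a): evaluate adapted heads on query sets and update phi."""
    opt_phi.zero_grad()
    meta_loss = sum(
        task_loss(phi, inner_adapt(phi, w_bar, sup, task_loss, alpha_w).detach(), qry)
        for sup, qry in task_batch
    ) / len(task_batch)
    meta_loss.backward()   # gradient flows into phi only (first-order)
    opt_phi.step()
    return float(meta_loss)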
Settings and Implementation We evaluate the behavior of the 5-way-1-shot task on the Omniglot dataset (Lake et al., 2015), i.e., the goal is to classify 5 unseen classes from only 1 labeled sample each. The dataset contains 1623 different handwritten characters from 50 different alphabets, with greyscale images of size 28 × 28. We follow similar settings to Ji et al. (2021). A five-layer fully-connected network is constructed, where the task-specific parameter $w_i$ corresponds to the last layer of the network and the shared embedding model $\phi$ corresponds to all preceding layers. Thus, we train the two sets of layers separately in the outer and inner levels of optimization. We build our model and set up training using the software library learn2learn (Arnold et al., 2020). We follow the official train-validation-test partition and train $\phi$, $w_i$ using the training set. The layer sizes of the network are 784 → 256 → 128 → 64 → 64 → 5. We set the number of tasks for the training and testing sets to 2000 and the task batch size to 32. The learning rates of $\phi$ and $w_i$ are 0.002 and 0.01, respectively. Results are averaged over 5 trial runs with different random seeds.
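For concreteness, the described architecture could be split as in the sketch below (ours). The paper specifies only the layer widths and that the last layer is task-specific; the activation choice and the exact module split are our assumptions.

import torch.nn as nn

# Layer widths follow the description above: 784 -> 256 -> 128 -> 64 -> 64 -> 5.
embedding = nn.Sequential(          # shared embedding phi (outer-level parameters)
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
head = nn.Linear(64, 5)             # task-specific head w_i (inner-level parameters)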
Results Evaluation Figure 1 presents the learning curves on the training set and the testing set, together with the generalization gap, for different values of the inner iteration number T and the outer iteration number K. The generalization gap is estimated by the difference between the training and testing loss. On the one hand, the model easily overfits the testing set as K increases drastically (Figure 1b), and the effect of T is very limited. On the other hand, with an appropriate value of K, a smaller T (i.e., fewer inner iterations) results in underfitting of the testing loss (T = 1 in Figure 1c causes the highest generalization gap due to the underfit training process). The trend of the generalization gap in terms of K and T indicates that large iteration numbers increase the risk of overfitting, which matches our analysis in Theorem 4 that the stability of TSGD (Algorithm 2) decreases drastically.
6 CONCLUSION
We give a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization framework. In particular, we establish a quantitative connection between generalization and algorithmic stability and provide the first generalization bounds for continuous (warm-started) updates of the inner and outer parameters in multiple settings. Our experiments suggest that inappropriate iteration numbers easily cause underfitting or overfitting. The tendency of the generalization gap also validates our theoretical results.
In the previous sections we discussed only first-order methods, while there exist a number of approaches that estimate second-order information or use momentum to solve the bilevel optimization problem. Dealing with the approximation of the hypergradient in the generalization analysis is another direction for future work.
A COMPARISON BETWEEN UD AND TSGD
Algorithm 3 Unrolled differentiation (UD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_0
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇_y g(x_k, y_k^t(x_k); D_{m2})
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); D_{m1})
9: end for
10: return x_K, y_K^T

Algorithm 4 Two-timescale SGD (TSGD)
1: Input: number of iterations K, step sizes αx, αy, initialization x0, y0, datasets D_{m1}, D_{m2}
2: Output: xK, yK
3: for k = 0 to K − 1 do
4:   y_k^0 ← y_{k−1}^T
5:   for t = 0 to T − 1 do
6:     y_k^{t+1} = y_k^t − αy ∇_y g(x_k, y_k^t(x_k); D_{m2})
7:   end for
8:   x_{k+1} = x_k − αx ∇f(x_k, y_k^T(x_k); D_{m1})
9: end for
10: return x_K, y_K^T
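The only difference between the two listings is how the inner variable is initialized at each outer step. A minimal sketch (ours, with hypothetical gradient callables; the outer step uses the direct gradient of f exactly as written in the listings):

def run_inner(x, y_init, grad_g_y, alpha_y, T, D2):
    y = y_init
    for _ in range(T):
        y = y - alpha_y * grad_g_y(x, y, D2)
    return y

def outer_loop(x0, y0, grad_f, grad_g_y, alpha_x, alpha_y, K, T, D1, D2, warm_start):
    """warm_start=True reproduces TSGD (y carried over); False reproduces UD (y restarted at y0)."""
    x, y_carry = x0, y0
    for _ in range(K):
        y_init = y_carry if warm_start else y0
        y_carry = run_inner(x, y_init, grad_g_y, alpha_y, T, D2)
        x = x - alpha_x * grad_f(x, y_carry, D1)
    return x, y_carry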
B PROOF OF PRELIMINARIES
B.1 THE PROOF OF THEOREM 1
Proof of Part (a). Since $\xi$ and $\xi_i$ are drawn from the same distribution, we know
$\mathbb{E}_A[R(A(D_{m_1},D_{m_2}),\mathcal{D}_1)-R_s(A(D_{m_1},D_{m_2}),D_{m_1})]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(D_{m_1},D_{m_2}),\xi)-f(A(D_{m_1},D_{m_2}),\xi_i)]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(\xi,\xi_2,\dots,\xi_{i-1},\xi_{i+1},\dots,\xi_{m_1},D_{m_2}),\xi_i)-f(A(D_{m_1},D_{m_2}),\xi_i)]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(D'_{m_1},D_{m_2}),\xi_i)-f(A(D_{m_1},D_{m_2}),\xi_i)]\le\beta$,
where $D'_{m_1}$ and $D_{m_1}$ differ in at most one sample $\xi_i$.
Proof of Part (b). Similarly, we have
$\mathbb{E}_A[f(A(D_{m_1},D_{m_2}),\mathcal{D}_1)-f(A(D_{m_1},D_{m_2}),D_{m_1})]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(D_{m_1},D_{m_2}),\xi)-f(A(D_{m_1},D_{m_2}),\xi_i)]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(\xi,\xi_2,\dots,\xi_{i-1},\xi_{i+1},\dots,\xi_{m_1},D_{m_2}),\xi_i)-f(A(D_{m_1},D_{m_2}),\xi_i)]$
$=\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[f(A(D'_{m_1},D_{m_2}),\xi_i)-f(A(D_{m_1},D_{m_2}),\xi_i)]$
$\le\mathbb{E}_{A,\xi_i\in D_{m_1},\xi\sim\mathcal{D}_1}[L_f\|A(D'_{m_1},D_{m_2})-A(D_{m_1},D_{m_2})\|]\le L_f\beta$.
To prove high probability bounds, we need the following lemma on the concentration behavior of the sum of weakly dependent random variables. Lemma 7 (Bousquet et al. 2020). Let $Z=(Z_1,\dots,Z_n)$ be a vector of independent random variables each taking values in $\mathcal{Z}$, and let $g_1,\dots,g_n$ be functions $g_i:\mathcal{Z}^n\to\mathbb{R}$ such that the following holds for any $i\in[n]$:
• $|\mathbb{E}[g_i(Z)\mid Z_i]|\le M$ a.s.,
• $\mathbb{E}[g_i(Z)\mid Z_{[n]\setminus\{i\}}]=0$ a.s.,
• $g_i$ has a bounded difference $\beta$ with respect to all variables except the $i$-th variable.
Then, for any $p\ge 2$,
$\big\|\sum_{i=1}^{n}g_i(Z)\big\|_p\le 12\sqrt{2}\,pn\beta\lceil\log_2 n\rceil+4M\sqrt{pn}$,
where the $L_p$-norm of a random variable $Z$ is denoted by $\|Z\|_p:=(\mathbb{E}[|Z|^p])^{1/p}$, $p\ge 1$.
Next, we state the following well-known relationship between tail bounds and moment bounds.
Lemma 8 (Bousquet et al. 2020; Vershynin 2018). Let $a,b\in\mathbb{R}_+$ and let $Z$ be a random variable with $\|Z\|_p\le\sqrt{p}\,a+p\,b$ for all $p\ge 2$. Then, for any $\delta\in(0,1)$, we have, with probability at least $1-\delta$,
$|Z|\le e\Big(a\sqrt{\log\tfrac{e}{\delta}}+b\log\tfrac{e}{\delta}\Big)$.
Proof of Part (c). In order to make use of Lemma 7 to obtain the generalization bounds, we introduce
$h_i=\mathbb{E}_{\xi'_i\sim\mathcal{D}_1}\big[\mathbb{E}_{\xi_i\sim\mathcal{D}_1}[f(A(D^i_{m_1},D_{m_2});\xi)]-f(A(D^i_{m_1},D_{m_2});\xi_i)\big]$,
where $D^i_{m_1}=\{\xi_1,\xi_2,\dots,\xi_{i-1},\xi'_i,\xi_{i+1},\dots,\xi_{m_1}\}$ and $\xi'_i$ follows the same distribution as $\xi_i$. Hence, we have
$|R(A(D_{m_1},D_{m_2});\mathcal{D}_1)-R_s(A(D_{m_1},D_{m_2});D_{m_1})|$
$=\frac{1}{m_1}\Big|\sum_{i=1}^{m_1}\big(\mathbb{E}_{\xi\sim\mathcal{D}_1}f(A(D_{m_1},D_{m_2});\xi)-f(A(D_{m_1},D_{m_2});\xi_i)\big)\Big|$
$\le\frac{1}{m_1}\Big|\sum_{i=1}^{m_1}\big(\mathbb{E}_{\xi\sim\mathcal{D}_1}f(A(D_{m_1},D_{m_2});\xi)-\mathbb{E}_{\xi\sim\mathcal{D}_1,\xi'_i\sim\mathcal{D}_1}f(A(D^i_{m_1},D_{m_2});\xi)\big)\Big|$
$\quad+\Big|\frac{1}{m_1}\sum_{i=1}^{m_1}\mathbb{E}_{\xi'_i\sim\mathcal{D}_1}\mathbb{E}_{\xi\sim\mathcal{D}_1}\big[f(A(D^i_{m_1},D_{m_2});\xi)\big]-f(A(D^i_{m_1},D_{m_2});\xi_i)\Big|$
$\quad+\frac{1}{m_1}\Big|\sum_{i=1}^{m_1}\big(\mathbb{E}_{\xi'_i\sim\mathcal{D}_1}f(A(D^i_{m_1},D_{m_2});\xi_i)-f(A(D_{m_1},D_{m_2});\xi_i)\big)\Big|$.
It then follows from the definition of uniform stability that
$|R(A(D_{m_1},D_{m_2});\mathcal{D}_1)-R_s(A(D_{m_1},D_{m_2});D_{m_1})|\le 2\beta+\Big|\frac{1}{m_1}\sum_{i=1}^{m_1}h_i\Big|$.
Notice that all conditions of Lemma 7 hold. Thus, the following can be derived for any $p\ge 2$:
$\big\|\sum_{i=1}^{m_1}h_i(\xi)\big\|_p\le 12\sqrt{2}\,pm_1\beta\lceil\log_2 m_1\rceil+4M\sqrt{pm_1}$.
Combining Lemma 7 and Lemma 8 with the $h_i$ defined above, we have the following inequality with probability $1-\delta$:
$\big|\sum_{i=1}^{m_1}h_i(\xi)\big|\le e\Big(4M\sqrt{m_1}\sqrt{\log\tfrac{e}{\delta}}+12\sqrt{2}\,\beta\lceil\log_2 m_1\rceil\sqrt{\log\tfrac{e}{\delta}}\Big)$.
The deviation bound now follows immediately:
$|R(A(D_{m_1},D_{m_2});\mathcal{D}_1)-R_s(A(D_{m_1},D_{m_2});D_{m_1})|\le 2\beta+e\Big(\frac{4M}{\sqrt{m_1}}\sqrt{\log\tfrac{e}{\delta}}+12\sqrt{2}\,\beta\lceil\log_2 m_1\rceil\sqrt{\log\tfrac{e}{\delta}}\Big)$.
The proof is completed.
C MAIN PROOF
C.1 APPROXIMATE EXPANSIVITY OF UPDATE RULES
With step sizes $\alpha_x$ and $\alpha_y$, the update rule for the single-timescale case can be presented as:
$G_s\big(\begin{bmatrix}x\\y\end{bmatrix}\big):=\begin{bmatrix}x-\alpha_x\nabla f(x,y)\\ y-\alpha_y\nabla_y g(x,y)\end{bmatrix}$.
Definition 5 (Expansivity). An update rule is $\eta$-expansive if for every $x,x'\in\mathbb{R}^{d_1}$ and $y,y'\in\mathbb{R}^{d_2}$: $\|G(x,y)-G(x',y')\|_2\le\eta\sqrt{\|x-x'\|_2^2+\|y-y'\|_2^2}$.
Lemma 9. Suppose that Assumptions 1 and 2 hold for Problem (1). Then:
1. If f and g are non-convex functions, then $G_s$ is $(1+\max\{l_f\alpha_x,l_g\alpha_y\})$-expansive with step sizes $\alpha_x,\alpha_y$.
2. If f and g are convex functions, then $G_s$ is $\sqrt{2+2\max\{(l_f\alpha_x)^2,(l_g\alpha_y)^2\}}$-expansive with step sizes $\alpha_x,\alpha_y$.
3. If f and g are strongly convex with $\mu_f$ and $\mu_g$ respectively, then $G_s$ is $\sqrt{2(1-2\alpha_x(\mu_f+\mu_g)+\alpha_x^2l^2)}$-expansive with step size
$\frac{(\mu_f+\mu_g)-\sqrt{(\mu_f+\mu_g)^2-0.5l^2}}{l^2}\le\alpha_x=\alpha_y\le\min\Big\{\frac{1}{\mu_f+\mu_g},\frac{(\mu_f+\mu_g)+\sqrt{(\mu_f+\mu_g)^2-0.5l^2}}{l^2}\Big\}$.
Proof. In Case 1, with the NC-NC objectives and the smoothness of the objectives in Assumptions 1 and 2, we have
$\Big\|G_s\big(\begin{bmatrix}x\\y\end{bmatrix}\big)-G_s\big(\begin{bmatrix}x'\\y'\end{bmatrix}\big)\Big\|=\Big\|\begin{bmatrix}x-x'-\alpha_x(\nabla f(x,y)-\nabla f(x',y'))\\ y-y'+\alpha_y(\nabla_y g(x,y)-\nabla_y g(x',y'))\end{bmatrix}\Big\|$
$\le\Big\|\begin{bmatrix}x-x'\\y-y'\end{bmatrix}\Big\|+\Big\|\begin{bmatrix}\alpha_x(\nabla f(x,y)-\nabla f(x',y'))\\ \alpha_y(\nabla_y g(x,y)-\nabla_y g(x',y'))\end{bmatrix}\Big\|\le(1+\max\{l_f\alpha_x,l_g\alpha_y\})\Big\|\begin{bmatrix}x-x'\\y-y'\end{bmatrix}\Big\|$.
In Case 2, with the monotonicity of the convex objectives' gradients, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y)) αy (∇yg(x,y)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x,y)−∇f (x′,y))αy (∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (4)
and∥∥∥∥Gs([ x′y ]) −Gs ([ x′ y′ ])∥∥∥∥2 = ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 − 2 [ x′ − x′y − y′ ]T [ αx (∇f(x′,y′)−∇f (x′,y)) αy (∇yg(x′,y′)−∇yg (x′,y)) ] ] + ∥∥∥∥[ αx (∇f(x′,y)−∇f (x′,y′))αy (∇yg(x′,y)−∇yg (x′,y′)) ]∥∥∥∥2
≤ max{(lfαx)2, (lgαy)2} ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥2 + ∥y − y′∥2 . (5)
Combining the above equations 6, 7 and inequality ( ∑k
i=1 ak) 2 ≤ k ∑k i=1 a 2 k, we can derive the
expansive of update rule Gs under convexity condition:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ (2 + 2max{(lfαx)2, (lgαy)2})∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥
2 + ∥y∥2) will be convex. With the above conclusions, we can derive the following:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y)) (∇yg(x,y)−∇yg (x′,y))
] + αx 2 ∥∥∥∥[ (∇f(x,y)−∇f (x′,y))(∇yg(x,y)−∇yg (x′,y)) ]∥∥∥∥2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥[ (∇f̃(x,y)−∇f̃ (x′,y))(∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ] ≤ ( 1− 2αx (µf + µg) + αx2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , lg}, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥[ ∇f(x,y)−∇f (x′,y)∇yg(x,y)−∇yg (x′,y) ]∥∥∥∥2
= ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T [ (∇f̃(x,y)−∇f̃ (x′,y)) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]
≥ ∥∥∥∥∥ [ ( ∇f̃(x,y)−∇f̃ (x′,y) ) (∇yg̃(x,y)−∇yg̃ (x′,y)) ]∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥GT ([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥2 ≤ 2 (1− 2αx (µf + µg) + αx2l2) ∥∥∥∥[ x− x′y − y′ ]∥∥∥∥2 .
C.2 SINGLE TIMESCALE
We first introduce the following lemma before providing the proof of the Theorem.
Lemma 10 (Hardt et al. (2016)). Consider two sequences of updates $G_s^1,\dots,G_s^K$ and $(G_s^1)',\dots,(G_s^K)'$ with initial points $x_0=x'_0$, $y_0=y'_0$. Define $\delta_k=\sqrt{\|x_k-x'_k\|^2+\|y_k-y'_k\|^2}$. Then, we have:
$\delta_{k+1}\le\eta\delta_k$ if $G_s^k=(G_s^k)'$ is $\eta$-expansive;
$\delta_{k+1}\le\min(\eta,1)\delta_k+2\sigma$ if $\sup\big\|\begin{bmatrix}x\\y\end{bmatrix}-G\big(\begin{bmatrix}x\\y\end{bmatrix}\big)\big\|\le\sigma$ and $G_s^k$ is $\eta$-expansive.
Proof. The first part of the inequality is obvious from the definition of expansivity and the assumption of Gks = (G k s) ′. For the second bound, note that:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) − [ xk yk ] + [ x′k y′k ] −G′s ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ xk − x′kyk − y′k ]∥∥∥∥
≤ δk + ∥∥∥∥Gs([ xkyk ]) − [ xk yk ]∥∥∥∥+ ∥∥∥∥G′s([ x′ky′k ]) − [ x′k y′k ]∥∥∥∥ ≤ δk + 2σ.
Also, δk+1 can be further expressed as:
δk+1 = ∥∥∥∥Gs([ xkyk ]) −G′s ([ x′k y′k ])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ]) +Gs ([ x′k y′k ]) −G′s ([ x′k y′k
])∥∥∥∥ ≤ ∥∥∥∥Gs([ xkyk ]) −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −Gs ([ x′k y′k ])∥∥∥∥+ ∥∥∥∥[ x′ky′k ] −G′s ([ x′k yk′
])∥∥∥∥ ≤ ηδk + 2σ.
Combining the above completes the proof of the Lemma 10.
Now, we are ready to prove Theorem 2:
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing only in one sample. Consider the updates G1s, ..., G K s and (G 1 s) ′, ..., (GKs ) ′. We can observe that the example chosen by the algorithm is the same in Dm1 , D ′ m1 at step k with probability 1−1/m1 and different with proba-
bility 1/m1. In the former case, we have identical update rules, while √ 1− 2αx (µf + µg) + α2xl2-
expansive can be employed in the latter through lemma 10. E [δk+1] ≤ ( 1− 1
m1
)( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 1 m1 E [δk] + 1 m1 2 √ (αxLf )2 + (αxLg)2
≤ ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))1/2 E [δk] + 2 m1 √ (αxLf ) 2 + (αxLg) 2
≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
k∑ i=0 ( 2 ( 1− 2αx (µy + µg) + α2xl2 ))i/2 ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 2 ( 1− 2αx (µf + µg) + α2xl2 ))i/2 (1) ≤ 2 √ (αxLf ) 2 + (αxLg) 2
m1
∞∑ i=0 ( 1− 2αx (µf + µg) + α2xl2 + 0.5 )i = √ (αxLf )2 + (αxLg)2
m1 ( αx (µf + µg)− α 2 xl 2 2 + 0.25 )
=
√ L2f + L 2 g
m1 (µf + µg − (αxl)2/2 + 0.25) .
Here (1) comes from the mean equality √ ab ≤ (a + b)/2 for any a, b ≥ 0 and the assumption of (uf+µg)− √ (uf+µg) 2−0.5l2
l2 ≤ αx ≤ (uf+µg)+
√ (uf+µg)
2−0.5l2 l2 , which finishes the proof.
Proof of Part(b). The proof of Part(b) is analogous to the above, thus we use the same notations for this part.
E [δk+1] ≤ ( 1− 1
m1
)( 2 + 2max { l2fα 2 x, l 2 yα 2 y })1/2 E [δk] + 1 m1 E [δk] + 2 m1 √ L2fα 2 x + L 2 gα 2 y
= ( 2 + 2max { l2fα 2 x, l 2 gα 2 y })1/2 E [δk] + 2 √ L2fα 2 x + L 2 gα 2 y
m1 E [δk] ≤ 2 √ L2fα 2 x + L 2 gα 2 y
m1 ·
( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2 − 1√
2 + 2max { l2fα 2 x, l 2 gα 2 y } − 1
E [δk] ≤ O √ L2fα 2 x + L 2 gα 2 y ( 2 + 2max { l2fα 2 x, l 2 gα 2 y }) k+1 2
m1
.
To prove stability in the NC-NC case, we introduce the following lemma:
Lemma 11 (Hardt et al. (2016)). Assume that $f(x,y;\xi)$ is $L_f$-Lipschitz continuous and $0\le f(x,y;\xi)\le 1$. Let $D_{m_1}$ and $D'_{m_1}$ be two datasets differing in only one sample. Denote $(x_K,y_K)$ and $(x'_K,y'_K)$ as the outputs of K steps of SSGD (the single-timescale algorithm) on $D_{m_1}$ and $D'_{m_1}$, respectively. Then, the following holds for every $k_0\in\{0,1,\dots,K\}$, where $\delta_k=\sqrt{\|x_k-x'_k\|^2+\|y_k-y'_k\|^2}$:
$\mathbb{E}[|f(x_K,y_K;\xi)-f(x'_K,y'_K;\xi)|]\le\frac{k_0}{m_1}+L_f\,\mathbb{E}[\delta_K\mid\delta_{k_0}=0]$.
Proof of Part(c). Applying Lemma 11, we get ready to prove the NC-NC case. Analogous to the previous case, we have:
E [δk+1] ≤ ( 1− 1
m1
)( 1 + cl
k
) E [δk] + 1
m
( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
k
= ( 1 + cl
k
) E [δk] +
2c √ l2f + l 2 g
m1k .
The following can be derived:
E [δK | δk0 = 0] ≤ K∑
k=k0+1
T∏ t=k+1 ( 1 + cl t ) 2c√l2f + l2g m1k
≤ K∑
k=k0+1
T∏ t=k+1 { exp ( cl t )} 2c√l2f + l2g m1k
≤ K∑
k=k0+1
exp
( K∑
t=k+1
cl
t
) 2c √ l2f + l 2 g
m1k
≤ k∑
k=k0+1
exp(cl · log(K/k)) 2c √ l2f + l 2 g
m1k ≤ 2c √ l2f + l 2 g
m1
K∑ k=k0+1 k−cl−1
≤ 2 √ l2f + l 2 g
m1l
( K
k0
)cl .
Hence, Lemma 11 indicates:
E [|f(x, y)− f (x′, y′)|] ≤ k0 m1
+ 2Lf
√ l2f + l 2 g
m1l
( K
k0
)cl .
The right hand side is approximately minimized when k0 = ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1 .
Therefore, we have
β ≤ O ( 2cLf √ l2f + l 2 g ) 1 cl+1 ·K cl cl+1
m1cl for argument stability.
C.3 TWO-TIMESCALE SGD (TSGD)
C.3.1 STANDARD SETTINGS
With step sizes $\alpha_x$ and $\alpha_y$, the update rule for the two-timescale case can be presented as:
$G_T\big(\begin{bmatrix}x_k\\y_k\end{bmatrix}\big):=\begin{bmatrix}x_k-\alpha_x\nabla f(x_k,y_k^T)\\ y_k-\alpha_y\sum_{t=1}^{T}\nabla_y g(x_k,y_k^t)\end{bmatrix}$.
Analogous to the single-timescale case, we first provide the expansivity of the update rule.
Lemma 12. Suppose that Assumptions 1 and 2 hold for Problem (1). Let $\alpha l=\max\{\alpha_x l_f,\frac{1+(\alpha_y l_g)^2}{1-\alpha_y l_g}\}$ for simplicity's sake and assume $\alpha_y l_g\le 1$. Then:
1. If f and g are non-convex functions, $G_T$ is $(1+\alpha lT)$-expansive.
2. If f and g are convex functions, $G_T$ is $(1+\alpha l)$-expansive with step sizes $\alpha_x,\alpha_y$.
3. If f and g are strongly convex with $\mu_f$ and $\mu_g$ respectively, $G_T$ is $(1+\alpha l)$-expansive with step sizes $\alpha_x=\alpha_y\le\frac{1}{\mu_f+\mu_g}$.
Proof. In Case 1 with the NC-NC objectives by the triangle inequality, we have:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥+∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ The first item can be derived from:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥ = ∥∥∥∥[ x− x′ − αx (∇f(x,y)−∇f (x′,y))y − y + αy∑Ty=1 (∇yg(x,yt)−∇yg (x′,yt)) ]∥∥∥∥
≤ (1 + αyT lg) ∥x− x′∥
The second item can be derived from:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ = ∥∥∥∥∥ [ x′ − x′ − αx (∇f(x′,y)−∇f (x′,y′)) y − y′ + ∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
≤ ∥∥∥∥[ x′ − x′y − y′ ]∥∥∥∥+ ∥∥∥∥∥ [ αx (∇f(x′,y)−∇f (x′,y′))∑T−1 t=0 αy ( ∇yg(x′,yt)−∇yg ( x′,yt ′ )) ]∥∥∥∥∥
From the Lipschitz continuous, we have:
T−1∑ t=0 αy ( ∇yg ( x,yt ) −∇yg ( x,yt )) ≤ T−1∑ t=0 αylg ∥∥yt − yt∥∥
Now we consider the t-th update: αylg ∥∥yt − yt∥∥ = αylg ∥∥yt−1 − αy∇yg (x′,yt−1)− yt−1 + αy∇yg (x′,yt−1)∥∥
≤ αylg ∥∥∥yt−1 − (yt−1)′∥∥∥+ (αylg)2 ∥∥∥yt−1 − (yt−1)′∥∥∥
· · · ≤ (αylg)t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′∥∥∥
According to the accumulation of the both side, we have: T−1∑ t=0 αylg ∥∥∥yt − (yt)′∥∥∥ ≤ αylg ∥∥∥y0 − (y0)′∥∥∥ ∥ T−1∑ t=1 [ (αylg) t ∥∥∥y0 − (y0)′∥∥∥+ (αylg)t+1 ∥∥∥y0 − (y0)′]∥∥∥
=
[ 1− (αylg)T
1− αylg +
(αylg) 2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ = [ 1− (αylg)T + (αylg)2 − (αylg)T+1
1− αylg ]∥∥∥y0 − (y0)′∥∥∥ ≤ 1 + (αylg) 2
1− αylg ∥∥y − (y)′∥∥
Let αl = max{αylg, 1+(αylg) 2
1−αylg }, then:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + Tαl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
In case 2, with the monotonicity of the convex objective’s gradient, we have:
⟨x− x′, αx (∇f(x,y)−∇f (x′,y))⟩ ≥ 0 ⟨y − y′, αy (∇yg(x′,y)−∇yg (x′,y′))⟩ ≥ 0.
Thus, the stated result then follows:∥∥∥∥GT ([ xy ]) −GT ([ x′ y ])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2 [ x− x′y − y ]T [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ ∥∥∥∥∥ [ αx (∇f(x,y)−∇f (x′,y))∑T−1 t=0 αy ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
≤ max (lfαx)2, ( 1 + (αylg) 2
1− αylg
)2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + ∥x− x′∥2 . (6)
and the second decomposition can be obtained by the NC-NC case:∥∥∥∥GT ([ x′y ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ ( 1 + max{lfαx, 1 + (αylg) 2
1− αylg }
) ∥y − y′∥ . (7)
let αl = max{αxlf , 1+(αylg) 2 1−αylg }. Combining the above equations 6, 7 and inequality √ 1 + (αl)2 ≤ (1 + αl)2, then we can derive the expansive of update rule GT under convexity condition:∥∥∥∥GT ([ xy ]) −GT ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
If f and g are strongly-convex, then, f̃(x,y) = f(x,y)− µf2 (∥x∥ 2 +∥y∥2) and g̃(x,y) = g(x,y)− µg 2 (∥x∥ 2 + ∥y∥2) will be convex. Let αx = αy = α and denote αl = max{αxlf , 1+(αylg) 2
1−αylg }, we can derive the following with the conclusions from the convex case:∥∥∥∥GT ([ xy ]) −GT ([ x′ y
])∥∥∥∥2 = ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 − 2αx [ x− x′y − y ]T [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]
+ αx 2 ∥∥∥∥∥ [ (∇f(x,y)−∇f (x′,y))∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= (1− (αxµf + αxµg))2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 + αx2 ∥∥∥∥∥ [ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ]∥∥∥∥∥ 2
− 2 (1− αxµf − αxµg)αx [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≤ ( 1− 2α (µf + µg) + α2l2 ) ∥x− x′∥2 .
The penultimate inequality arises from the smoothness of f̃ , g̃, which is based on our assumption for simplicity that l = max{lf , 1+(αylg) 2
(1−αylg)αy }, and the details will be revealed as follows:
l2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 ≥ ∥∥∥∥∥ [ ∇f(x,y)−∇f (x′,y)∑T−1 t=0 ( ∇yg(x,yt)−∇yg ( x,yt ′ )) ]∥∥∥∥∥ 2
= ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2
+ 2 (µf + µg) [ x− x′ y − y ]T (∇f̃(x,y)−∇f̃ (x′,y))∑T−1 t=0 ( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ ))
≥ ∥∥∥∥∥∥ (∇f̃(x,y)−∇f̃ (x′,y))∑T−1
t=0
( ∇yg̃(x,yt)−∇yg̃ ( x,yt ′ )) ∥∥∥∥∥∥ 2 + (µf + µg) 2 ∥∥∥∥[ x− x′y − y ]∥∥∥∥2 .
Similar to the convex case, we can have:∥∥∥∥Gs([ xy ]) −Gs ([ x′ y′ ])∥∥∥∥ ≤ (1 + αl)∥∥∥∥[ x− x′y − y′ ]∥∥∥∥ .
Proof. Because the main proof of Lemma 12 is similar to that of Lemma 9, we omit it.
Next, we give a bound for the update rule GT and prepare to prove Theorem 3. Since g() is a lg-smooth function, we have:
g ( x,yt+1 ) ≤ g ( x,yt ) + 〈 ∇g ( x,yt ) ,yt+1 − yt 〉 +
lg 2 ∥∥yt+1 − yt∥∥2 . ≤ g ( x,yt ) − 〈 ∇g ( x,yt ) , αy∇g ( x,yt )〉 +
lg 2 ∥∥αy∇g (x,yt)∥∥2 ≤ g ( x,yt ) − αy ( 1− αylg
2 )∥∥∇g (x,yt)∥∥2 . The two sides are accumulated from t = 1 to t = T and we could derive the following by Cauchy–Schwarz inequality:
T∑ t=1 ∥∥∇g (x,yt)∥∥2 ≤ g (x,y1)− g (x,yT ) αy (2− αylg) ⇒
( T∑
i=1
∇g(x,yt) )2 ≤ T
T∑ i=1 ∇g2(x,yt)
≤ T (g (x,y1)− g (x,yT )) αy(2− αylg) .
Hence, the bound of GT equals to √ L2fα 2 x + ( T (g(x,y1)−g(x,yT ))
αy(2−αylg)
)2 . Now, we are ready to give the
proof of Theorem 3.
Proof of Part(a). Suppose that Dm1 and D ′ m1 are two neighboring sets differing in only one sample. Consider the updates G1T , ..., G | 1. What are the strengths and weaknesses of the paper regarding its contributions, extensions, and improvements compared to prior works?
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. What are the questions raised by the reviewer regarding the paper's analysis, claims, and comparisons with other works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work studies the generalization of the bilevel optimization problem. Specifically, this work analyzes the SSGD and TSGD bilevel optimization algorithms. It adopts the notion of uniform stability on validation in expectation [1] and analyzes the stability coefficient β to build a generalization bound for the two algorithms. This work also introduces the notion of argument stability (a special case of uniform stability), and provides a high probability bound for almost surely uniformly stable algorithms.
Strengths And Weaknesses
Strength
This paper makes the following extensions compared to [1]:
This paper considers the stability bound of SSGD and TSGD algorithms, which is not studied in [1]
This paper also studies the case where the outer level problem is convex or strongly convex.
This paper provides a better high probability bound, from O(β√m1) to O(β log m1), compared to [1]. The improvement is significant.
Weakness
There are some improper or incorrect claims on the prior work [1], upon which this work is built:
[1] studies the UD algorithm, and it seems that the author has some misunderstanding of the UD algorithm. Indeed, UD views the y produced by the inner level optimization as a function of x, i.e., y(x) = H_{T−1} ∘ H_{T−2} ∘ ⋯ ∘ H_0(x), where H_t represents the gradient update in Line 6 of Algorithm 3. When UD optimizes x, it would use the gradient ∇_x f(x, y(x); D_{m1}) = ∇_x f(x, y; D_{m1}) + ∇_x y(x) ∇_y f(x, y; D_{m1}), instead of ∇_x f(x, y; D_{m1}) as in Line 8 of Algorithm 3. This means that UD would backpropagate through the optimization trajectory of y. However, as shown in Remark 4, the author thinks UD treats y(x) as an argument independent of x, which is an incorrect claim. As a result, the claim "However, there are some technical flaws in their analysis..." in Section 1, and the claim "...and thus cannot be treated as an argument independent of λ_t′, which is misused and causes technical flaws in their proof subsequently" in Remark 4, are both incorrect.
Since this work considers two different algorithms, SSGD and TSGD, rather than UD, it is perhaps less proper to state "for the SC-SC and C-C cases with single-timescale update strategy we significantly improve the generalization bounds compared with [1]", since the algorithm already changes.
This work claims that "[1] only considers a general setting for inner function" in Section 1. However, [1] also considers convex and strongly convex inner functions. These results are provided in Appendix C in [1].
This work claims that the bound of [1] "is quite loose in some cases" in Section 1. However, [1] constructs a worst case (see Appendix B in [1]) to prove that the bound of [1] is tight if no extra assumptions on the inner or outer functions are provided. Besides, [1] also gives tighter bounds when convex or strongly convex assumptions are made. Therefore, the claim that the bound of [1] "is quite loose in some cases" is improper.
This work claims that [1] has an undesirable issue "the stability for general bilevel optimization is still unknown". However, the problem formulation of bilevel optimization (see Eq.(2)) in this work is exactly the same as [1]. Therefore, the range of bilevel optimization considered in this work is not more general than [1] in theory, and this undesirable issue remains in this work. As a result, the claim "our work is the first thorough generalization analysis for general bilevel optimization problem" is improper.
Other questions:
The notion of argument stability is a special case of uniform stability, and it is obvious that a β-argument-stable (in expectation) algorithm is also an L_f·β-uniform-stable (in expectation) algorithm. Is it necessary to introduce this additional notion?
Theorem 1(c) assumes the algorithm A is uniformly stable almost surely. This assumption looks quite strong for randomized algorithms, since it requires that for all possible randomness in the algorithm, changing a data point in the validation set won't cause the loss to change by more than β. Can SSGD and TSGD satisfy this assumption? It seems that the author does not verify this assumption for the studied SSGD and TSGD algorithms, and does not establish high probability bounds for the two algorithms.
In Table 1, the TSGD with the C-C setting has an O(TK/m_1) bound, and TSGD with the NC-NC setting has an O(T^{1−κ_6} K^{κ_6}/m_1) bound. The former is much looser. Why does a stronger assumption lead to a looser bound?
Typos:
In the second line below Eq. (2), g(x, y(x); ξ_i) should be g(x, y; ξ_i).
In the third line of the second paragraph of Section 2.2, f(x, y(x); ξ) should be f(x, y; ξ).
In Line 7 of Algorithm 2, x_k^t should be x_k.
[1] Stability and Generalization of Bilevel Programming in Hyperparameter Optimization
[2] Train faster, generalize better: Stability of stochastic gradient descent
Clarity, Quality, Novelty And Reproducibility
There are some typos in the paper that should be double-checked. The technique in this work is relatively less novel, since it mainly follows [1, 2].
ICLR | Title
Deformable Capsules for Object Detection
Abstract
Capsule networks promise significant benefits over convolutional networks by storing stronger internal representations, and routing information based on the agreement between intermediate representations’ projections. Despite this, their success has been mostly limited to small-scale classification datasets due to their computationally expensive nature. Recent studies have partially overcome this burden by locally-constraining the dynamic routing of features with convolutional capsules. Though memory efficient, convolutional capsules impose geometric constraints which fundamentally limit the ability of capsules to model the pose/deformation of objects. Further, they do not address the bigger memory concern of class-capsules scaling-up to bigger tasks such as detection or large-scale classification. In this study, we introduce deformable capsules (DeformCaps), a new capsule structure (SplitCaps), and a novel dynamic routing algorithm (SE-Routing) to balance computational efficiency with the need for modeling a large number of objects and classes. We demonstrate that the proposed methods allow capsules to efficiently scale-up to large-scale computer vision tasks for the first time, and create the first-ever capsule network for object detection in the literature. Our proposed architecture is a one-stage detection framework and obtains results on MS COCO which are on-par with state-of-the-art one-stage CNN-based methods, while producing fewer false positive detections.
1 INTRODUCTION
Capsule networks promise many potential benefits over convolutional neural networks (CNNs). These include practical benefits, such as requiring less data for training or better handling unbalanced class distributions (Jiménez-Sánchez et al., 2018), and important theoretical benefits, such as building in stronger internal representations of objects (Punjabi et al., 2020), and modeling the agreement between those intermediate representations which combine to form final object representations (e.g. part-whole relationships) (Kosiorek et al., 2019; Sabour et al., 2017). Although these benefits might not be seen in the performance metrics (e.g. average precision) on standard benchmark computer vision datasets, they are important for real-world applications. As an example, it was found by Alcorn et al. (2019) that CNNs fail to recognize 97% of their pose space, while capsule networks have been shown to be far more robust to pose variations of objects (Hinton et al., 2018); further, real-world datasets are not often as extensive and cleanly distributed as ImageNet or MS COCO.
These benefits are achieved in capsule networks by storing richer vector (or matrix) representations of features, rather than the simple scalars of CNNs, and dynamically choosing how to route that information through the network. The instantiation parameters for a feature are stored in these capsule vectors and contain information (e.g. pose, deformation, hue, texture) useful for constructing the object being modeled. Early studies have shown strong evidence that these vectors do in fact capture important local and global variations across objects’ feature components (or parts) within a class (Punjabi et al., 2020; Sabour et al., 2017). Inside their networks, capsules dynamically route their information, seeking to maximize the agreement between these vector feature representations and the higher-level feature vectors they are attempting to form.
Despite their potential benefits, many have remained unconvinced about the general applicability of capsule networks to large-scale computer vision tasks. To date, no capsule-based study has achieved classification performance comparable to a CNN on datasets such as ImageNet, instead relegated to smaller datasets such as MNIST or CIFAR. Worse still, to the best of our knowledge, no capsule network has shown successful results in object detection, a very important problem in computer
vision, robotics, and medical imaging. Now, the argument can be made that standard benchmark datasets such as ImageNet or MS COCO likely contain the majority of objects within that 3% range of usual poses, and thus CNNs will appear to perform extremely well when measured in terms of accuracy, stripping capsule networks of one of their largest advantages. However, until capsule networks can perform on par with CNNs on these typical object poses, few will care about the benefits of stronger internal representations and better generalization to unseen poses.
Summary of Our Contributions: (1) We propose the first ever capsule-based object detection framework in the literature. Our network is a one-stage (single-shot) architecture, where objects are both localized and classified using capsules, and can perform on par with state-of-the-art CNNs on a large-scale dataset (MS COCO). (2) We address the geometric constraint of convolutional capsules (and locally-constrained routing) by introducing deformable capsules, where parent capsules learn to adaptively sample child capsules, effectively eliminating rigid spatial restrictions while remaining memory efficient. (3) We design a new capsule-based prediction head structure, SplitCaps, which reformulates the projections of an object's instantiation parameters, presence, and class, eliminating the previous dimensional increase of capsules by the number of classes. This crucial addition enables the training of capsule networks on large-scale computer vision datasets for the first time in the literature. (4) To route information across SplitCaps' unique structure, we introduce a novel Squeeze-and-Excitation-inspired dynamic routing algorithm, SE-Routing, which seeks to maximize agreement between child capsule projections without the need for iterative loops.
2 DEFORMABLE CAPSULES: FIXING LOCALLY-CONSTRAINED DYNAMIC ROUTING
The capsule network architecture proposed by Sabour et al. (2017) acted on global information, where digit capsules represented the pose and presence of digits in an image regardless of spatial location. The information from all children in the previous layer was sent to every parent in the following layer, weighted via the routing coefficients found in a cosine similarity routing algorithm. While this proved to be a highly-effective strategy, it was also computationally expensive, limiting its use to only small-scale datasets. Recent works attempted to scale up capsule networks to larger problems such as biomedical image segmentation (LaLonde & Bagci, 2018) or action detection in video (Duarte et al., 2018) by using convolutional capsules and locally-constraining the routing algorithm. Although efficient solutions were presented in those studies, the representation power of capsule networks was fundamentally limited due to imposing local constraints. This is because convolutions, by design, have a fixed geometric structure (Dai et al., 2017), and such a geometric constraint significantly inhibits capsules’ ability to model part-whole relationships, relying on parts of objects to fall within a fixed local grid. Therefore, it is unreasonable to expect a capsule to effectively represent the pose and deformations of an object when the information related to the parts of that object are locked into a fixed spatial relationship.
In this study, we propose to effectively solve this aforementioned problem by introducing a method that balances efficiency with the ability for a capsule to represent any pose and deformation of an object (i.e. where child capsules can be found in different spatial relationships to one another for the same parent). In such a formulation, global information is not explicitly required, but it does require parent capsules to have more flexibility over which child capsules they draw information from. Our proposed solution is deformable capsules. The idea behind the proposed algorithm is simple: if parent capsules are supposed to capture common deformations of the objects they represent within their vectors, then the choice of which children to aggregate information from must be handled in a deformable manner as well. Deformable capsules allow parents to adaptively gather projections from a non-spatially-fixed set of children, and thus effectively and efficiently model objects’ poses.
To achieve this overall goal, we follow the same efficient convolutional capsule paradigm, where projection vectors are formed via a convolution operation with a kernel centered on the parent capsules’ spatial location, but now we learn an additional set of weights for each parent capsule. These learnable weights are the same shape as each parent’s kernel, and represent the offset values for the spatial sampling of child capsules for that parent. Based on the child capsule representation vectors in the previous layer, these weights learn which children a parent capsule should adaptively sample from for a given input image. Dynamic routing then determines how to weight the information coming from each of these children based on their agreement for each projected parent.
Let us give a concrete example to better illustrate the parameter savings of this technique. Given an H = 128 by W = 128 grid of child capsules with ci = 32 capsule types of ai = 8 atoms each, being routed to a set of cj = 10 parent capsule types of aj = 16 atoms each, the fully-connected capsules of Sabour et al. (2017) would require H × W × ci × ai × cj × aj ⇒ 128 × 128 × 32 × 8 × 10 × 16 ≈ 671M parameters for this layer alone (assuming the goal is classification, with detection requiring a multiplicative increase by the detection grid size). Instead, using our proposed deformable capsules with a k² = 5² kernel, we only require 2 × k × k × ai × cj × aj ⇒ 2 × 5 × 5 × 8 × 10 × 16 ≈ 64K parameters. Convolutional capsules with locally-constrained routing require 32K parameters, not needing the additional spatial offsets kernel, but as mentioned above, they are fundamentally limited in the poses and deformations that they can represent. In our experiments, we found deformable capsules to converge faster and to much higher performance than convolutional capsules.
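The arithmetic above can be verified with a short script (ours, illustrative only):

# Parameter counts for the example above.
H, W, ci, ai, cj, aj, k = 128, 128, 32, 8, 10, 16, 5

fully_connected = H * W * ci * ai * cj * aj      # Sabour et al. (2017) style capsules
deformable      = 2 * k * k * ai * cj * aj       # kernel weights + spatial offset weights
convolutional   = k * k * ai * cj * aj           # locally-constrained convolutional capsules

print(f"fully connected: {fully_connected:,}")   # 671,088,640  (~671M)
print(f"deformable:      {deformable:,}")        # 64,000       (~64K)
print(f"convolutional:   {convolutional:,}")     # 32,000       (~32K)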
3 OBJECTS AS CAPSULES: SplitCaps WITH SE-Routing
We propose a novel one-stage (single-shot) capsule network architecture for object detection, called DeformCaps, where objects are detected, classified, and modeled with capsules. Our overall network architecture, shown in Fig. 1, is built upon CenterNet by Zhou et al. (2019a) who proposed to represent objects in images as scalar point values located at the center of their bounding boxes. The authors then regress the remaining characteristics of the object (e.g. height, width, depth) for each center-point detected. In our work, we follow the same center-point detection paradigm, but represent our objects with capsule vectors instead. Since several recent studies have found utilizing a CNN backbone before forming capsule types to be beneficial to overall performance (Duarte et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020), we adopt the preferred backbone of CenterNet, DLA-34 (Yu et al., 2018), as this gave the best trade-off between speed and accuracy. Features extracted from the backbone are sent to our capsule object detection head and a bounding box regression head. In order to perform object detection using capsules, we introduce a new capsule structure, called SplitCaps, composed of class-agnostic and class-presence capsules, and a novel routing algorithm, called SE-Routing.
3.1 SPLITCAPS: CLASS-AGNOSTIC CAPSULES AND CLASS-PRESENCE CAPSULES
As discussed in Section 2, the original CapsNet by Sabour et al. (2017) was extremely expensive in computation and our proposed deformable capsules is a best possible solution to balance non-rigid deformations of objects while remaining memory efficient. However, there is a more significant memory hurdle capsule networks must overcome when scaling up to large-scale datasets, such as MS COCO. In their current implementation, capsule networks represent each class with its own parent
capsule vector. On small scale classification datasets (and using deformable routing), this is not an issue; it amounts to 2× k× k× ai × cj × aj parameters and N × ci × cj × aj × 4 bytes to store the intermediate representations to be routed for the parents, where cj is usually around 10 classes to represent. Let us suppose we have 5× 5 kernels with 32 input capsule types of 8 atoms per capsule, 10 output capsule types of 16 atoms per capsule, and a batch size of 32. In total, we would have 2× 5× 5× 8× 10× 16 = 64K parameters and 32× 32× 10× 16× 4 ≈ 655 KB. When we scale this up to object detection, we now need to store representations for every possible object location, and for MS COCO we need to represent 80 possible classes. This gives us 2× k × k×ai× cj ×aj ⇒ 2×5×5×8×80×16 = 512K parameters and N ×H×W × ci× cj ×aj ×4 bytes⇒ 32× 128× 128× 32× 80× 16× 4 ≈ 86 GB for the intermediate representations, where we assume the output grid of detections is 128 × 128 with a single detection (i.e. bounding box) predicted per class per location. The problem is not any better for large-scale classification datasets such as ImageNet either, where we lose the grid of predictions but grow to 1000 classes, which would require 2× 5× 5× 8× 1000× 16 = 6.4M parameters and 32× 32× 1000× 16× 4 ≈ 66 GB for the intermediate representations. Clearly, with most GPU memories limited to 12–24 GB, a solution is needed to be found for capsule networks to scale up to larger-scale computer vision tasks.
To overcome this issue, we propose a new type of capsule architecture, SplitCaps, to more efficiently scale capsule networks to large-scale computer vision tasks. SplitCaps contains two parent capsule types, each with a different number of atoms per capsule, for each location of the detection grid. As before, the idea is to balance efficiency with the ability to learn powerful representations. Towards this goal, SplitCaps divides the tasks of (i) learning the instantiation parameters necessary to model the possible variations of an object and (ii) predicting which classes of objects are present in a given input between its two parent capsules. We refer to the first capsule type as our class-agnostic object instantiation capsules, and to the second as our class presence capsules.
Class-agnostic object instantiation capsules: The purpose of these capsules is similar to those in previous works: model the possible variations (in pose, deformation, texture, etc.) of objects within a vector of instantiation parameters, the span of which should cover all possible variations for that object at test time (hence why capsules are better than CNNs at generalizing to unseen poses). While previous capsule networks did this in a class-wise manner, we argue that such a formulation is not required and is possibly redundant. Many variations (e.g. rotation, skew, stroke thickness) may be class-independent, and thus modeling these variations class-wise would require repetition across each capsule type. Instead, we propose to model all classes within a single capsule type (i.e. class-agnostic). In this way, while class-dependent variations would each require their own dimensions of the vector, class-independent variations can each be modeled along a single dimension for all possible objects. Since it is reasonable to assume there will be at least some class-specific variations, we increase the default capsule vector dimension from 16 to 64 to accommodate possible class-specific instantiation parameters.
During training, if an object is present at a given spatial location, the 64-dimensional capsule vector for that location is fed to a reconstruction regularization sub-network to construct the mask of that object, similar to the reconstruction regularization used by Sabour et al. (2017) for classification. This sub-network is a relatively small and fast addition: a set of three ReLU-activated 1 × 1 convolutional layers with 256 filters each, followed by a final sigmoid-activated 1 × 1 convolution with N = n² = 28² = 784 filters, before reshaping outputs to n × n. Since objects' scales vary dramatically, we scale-normalize all objects' ground-truth masks to be 28 × 28 (following He et al. (2017)). Supervised training is conducted by computing the Dice loss (Milletari et al., 2016) between the predicted reconstruction, r, and the object's mask, m:
$$L_r = \frac{2\sum_i^N r_i m_i}{\sum_i^N r_i^2 + \sum_i^N m_i^2}, \tag{1}$$
where Lr is used to provide a regularization signal to the instantiation parameters being learned.

Class presence capsules: The class presence capsules attempt to model which classes of objects are present in the input at each spatial location, if any. We accomplish this by setting the atoms per capsule to the number of classes being represented (i.e. 80 for MS COCO). Just as Hinton et al. (2018) separately modeled pose (with a matrix) and activation (with a scalar), this 80-dimensional vector can be viewed as a class-dependent set of activation values. The activation values are then passed through a sigmoid function and thresholded; if one or more activation values are above the
threshold, an object is determined to be at that spatial location with the strongest activated dimension determining the class label.
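A minimal sketch of the two SplitCaps outputs follows, assuming the instantiation capsules arrive as a (B, 64, H, W) tensor and the class presence capsules as a (B, 80, H, W) tensor; module and function names are ours, and the loss is written as 1 − Dice so that it can be minimized.

```python
import torch
import torch.nn as nn

class MaskReconstruction(nn.Module):
    """Three ReLU 1x1 convs (256 filters) + a sigmoid 1x1 conv (28*28 filters)."""
    def __init__(self, in_dim=64, n=28):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 1), nn.ReLU(inplace=True),
            nn.Conv2d(256, n * n, 1), nn.Sigmoid(),
        )

    def forward(self, inst_caps):                # (B, 64, H, W)
        r = self.net(inst_caps)                  # (B, 784, H, W)
        b, _, h, w = r.shape
        return r.view(b, self.n, self.n, h, w)   # one 28x28 mask per location

def dice_loss(r, m, eps=1e-6):
    """Eq. (1) turned into a minimizable loss; r, m are (K, 28, 28) masks
    gathered at the K ground-truth object-center locations."""
    num = 2 * (r * m).sum(dim=(1, 2))
    den = (r ** 2).sum(dim=(1, 2)) + (m ** 2).sum(dim=(1, 2)) + eps
    return 1 - (num / den).mean()

def decode_class_presence(cls_caps, thresh=0.5):
    """Sigmoid + threshold; the strongest activated atom gives the class label."""
    probs = torch.sigmoid(cls_caps)              # (B, 80, H, W)
    present = probs.max(dim=1).values > thresh   # (B, H, W) object-present mask
    labels = probs.argmax(dim=1)                 # (B, H, W) class labels
    return present, labels
```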
In order to produce a smooth loss function during training, we create a ground-truth heatmap by fitting a Gaussian distribution, rather than a single point, to the center-point of each object's bounding box, with variance proportional to the size of the box, following Zhou et al. (2019a). More specifically, we create a heatmap H ∈ [0, 1]^(X/d × Y/d × K) containing each down-scaled ground-truth center-point p̃ = (px/d, py/d) for class k ∈ K using a Gaussian kernel Hxyk = exp(−((x − p̃x)² + (y − p̃y)²)/(2σp²)), where d is the amount of downsampling in the network and σp is an object-size-adaptive standard deviation (Law & Deng, 2018). In the case of overlapping Gaussians, we take the element-wise maximum. To handle the large class imbalance between objects and background in our heatmaps, we use a penalty-reduced pixel-wise logistic regression with a focal loss (Lin et al., 2017):
$$L_h = \frac{-1}{P} \sum_{xyk} \begin{cases} (1 - \hat{H}_{xyk})^{\alpha}\, \log(\hat{H}_{xyk}) & \text{if } H_{xyk} = 1, \\ (1 - H_{xyk})^{\beta}\, (\hat{H}_{xyk})^{\alpha}\, \log(1 - \hat{H}_{xyk}) & \text{otherwise;} \end{cases} \tag{2}$$
where α, β are hyper-parameters of the focal loss and P is the number of center-points in the input, used to normalize all positive focal loss instances to 1 (Zhou et al., 2019a). We use α = 2 and β = 4 in all our experiments, following Law & Deng (2018). At test time, to efficiently retrieve an object's exact center, we run a 3 × 3 max-pooling over the thresholded spatial map. To predict the height and width of the bounding boxes of objects and recover the x, y offsets needed to map back to the upscaled image dimensions, we follow the same formulation as Zhou et al. (2019a) and pass the backbone features through a 3 × 3 convolutional layer with 256 feature maps, then a 1 × 1 convolutional layer with 2 feature maps. These layers predict the local offset, Ô ∈ R^(W/d × H/d × 2), and size prediction, Ŝ ∈ R^(W/d × H/d × 2), for each center-point and are supervised by
$$L_o = \frac{1}{P} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left(\frac{p}{d} - \tilde{p}\right) \right| \quad \text{and} \quad L_s = \frac{1}{P} \sum_{p} \left| \hat{S}_{p} - (x_2 - x_1,\, y_2 - y_1) \right|, \tag{3}$$
respectively. Our final objective function is thus defined as L = Lh + λrLr + λsLs + λoLo. We keep λs = 0.1 and λo = 1 as done in Zhou et al. (2019a), and set λr = 0.1 initially, then step it up to λr = 2.0 half-way through training.
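For concreteness, a short sketch of the Gaussian ground-truth heatmap and the penalty-reduced focal loss of Eq. (2) is given below, in the spirit of the CenterNet implementation; tensor shapes and function names are our own assumptions.

```python
import torch

def splat_gaussian(heatmap, cx, cy, sigma):
    """Write one object's Gaussian bump into a single-class (H, W) heatmap,
    keeping the element-wise maximum where Gaussians overlap."""
    H, W = heatmap.shape
    ys = torch.arange(H, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, -1)
    g = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return torch.max(heatmap, g)

def penalty_reduced_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Eq. (2); pred is the sigmoid-activated heatmap, gt the Gaussian target,
    both of shape (B, K, H, W)."""
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = ((1 - pred) ** alpha) * torch.log(pred + eps) * pos
    neg_loss = ((1 - gt) ** beta) * (pred ** alpha) * torch.log(1 - pred + eps) * neg
    num_pos = pos.sum().clamp(min=1)  # P in Eq. (2)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```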
3.2 SE-ROUTING: SPLITCAPS NEW ROUTING ALGORITHM
As a core component of capsule networks, dynamic routing seeks to maximize the agreement between child capsule projections for parent capsules and to fully leverage the richer representations being stored. Since SplitCaps introduces a unique capsule head structure, where instantiation parameters and activations are split across different capsule types, previous dynamic routing algorithms can no longer be directly applied. To overcome this, we propose a new dynamic routing algorithm, inspired by Squeeze-and-Excitation networks (Hu et al., 2018), which we call SE-Routing and illustrate in Fig. 2.
Previously proposed dynamic routing algorithms (e.g. Sabour et al. (2017) and Hinton et al. (2018)) were typically iterative, requiring a hand-tuned loop of routing iterations, which proved to be slow and temperamental in practice. Different studies found different numbers of iterations to be effective, and one meta-study of five different iterative dynamic routing algorithms found them all to be largely ineffective (Paik et al., 2019). To avoid this pitfall, we propose a new routing algorithm to dynamically assign weights to child capsule projections based on their agreement, computed in a single forward pass using a simple gating mechanism with sigmoid activation. Unlike Sabour et al. (2017) which uses a routing softmax to force a one-hot mapping of information from each child to parents, our proposed SE-Routing learns a non-mutually-exclusive relationship between children and parents to allow multiple children to be emphasised for each parent.
Creating child capsule projection descriptors (squeeze): Following the Squeeze-and-Excitation paradigm, we first must compute the squeeze (i.e. a set of descriptors which summarize relevant information about each feature) to create a set of child capsule projection descriptors. In Hu et al. (2018), the authors proposed to use the global average activation of each channel with the goal of modeling channel interdependencies. In this study, our goal is to maximize the agreement
between child projections, for both the instantiation parameters and class presence of the object being modeled. With that motivation, we compute three separate descriptors which are fed into the excitation phase of the routing: (i) cosine angle between the mean projection vector and each child’s projection, which captures object instantiation agreement; (ii) Kullback–Leibler (KL) divergence of each child’s predicted class distribution and an aggregated distribution, which captures class presence agreement; and (iii) variance of each child’s predicted class distribution, which captures class presence uncertainty.
The cosine angle descriptor, a, is calculated in a similar manner to Sabour et al. (2017). A mean projection vector, ũ = (1/N) ∑_i ûi, is first computed using the set of child capsule projections, Û = {û1, û2, ..., ûN}. Then we compute a set of cosine angles between each individual projection and this mean, a = {a1, a2, ..., aN}, where ai = (ũ · ûi)/(|ũ| · |ûi|). In a similar fashion, we compute a KL divergence descriptor, b, by first creating an aggregate object class distribution. To create this aggregate distribution, we follow the work of Clemen & Winkler (1999), insofar as each child capsule type is treated as an expert giving its prediction about the true underlying class distribution. First, we compute a simple linear opinion pool (Stone, 1961), p(z̃) = (1/N) ∑_i σs(zi), where p(z̃) is the aggregated probability distribution, Z = {z1, z2, ..., zN} is the set of child class presence projection vectors, and σs(zi)j = exp(zij) / ∑_k exp(zik), for j = {1, ..., K} and i = {1, ..., N}, is the softmax function used to transform projection vectors into normalized probability distributions over the K classes. Then, we measure the agreement between each child's predicted distribution, σs(zi), and the aggregate distribution, p(z̃), as the KL divergence between them: bi = ∑_k p(z̃k) log(p(z̃k)/σs(zi)k).
Lastly, we take our child capsules' predicted distributions, σs(zi), and compute their variance to estimate the uncertainty each child has: ci = ∑_k (σs(zi)k − (1/K) ∑_k σs(zi)k)². Our three sets of descriptors are efficiently computed for all capsules simultaneously (i.e. for the entire batch and across spatial locations) on GPU in parallel with matrix operations. They are then concatenated, s = a ⊕ b ⊕ c, and fed to the excitation layers of our routing mechanism (Fig. 2).

Determining routing coefficients (excitation): The excitation stage of the SE-Routing algorithm has the task of learning a mapping from the concatenated set of capsule descriptors, s, into a set
of routing coefficients for the child capsule projections. Since parent capsule types are no longer different classes in our formulation, but rather two separate aspects of modeling objects, we compute a single set of routing coefficients at each spatial location that is shared by both parents. Formally, this mapping is computed as r = σ(W2 δ(W1 s)), where W1 ∈ R^((3N/t) × 3N), W2 ∈ R^(N × (3N/t)), δ is the ReLU activation function, σ is the sigmoid activation function, and t is the reduction ratio used to form this mapping into a bottleneck of two fully-connected (FC) layers. A brief note: although excitation routing has interesting parallels to self-attention (e.g. being dynamically conditioned on the input), our learned mapping is non-mutually-exclusive, while self-attention and CapsNet's dynamic routing both rely on applying a softmax function over outputs.
Finally, with the determined routing coefficients r = {r1, r2, ..., rN}, we can compute the output of the SplitCaps detection head. Projection vectors from each child to each parent are computed using the proposed deformable capsules (as described in Section 2). These projections are then combined and weighted by the routing coefficients to form the final parent capsules. These final parents contain the instantiation parameters, vobj = ∑_i ri ûobj|i, and class presence, vcls = ∑_i ri ûcls|i, of any objects being represented at the given spatial location within the detection grid.
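To make the squeeze and excitation steps concrete, here is a hedged single-location sketch: u_obj holds the N child projections for the instantiation parent (N × 64) and z_cls the projections for the class-presence parent (N × K); in practice everything is batched over the detection grid, and all module and variable names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SERouting(nn.Module):
    def __init__(self, num_children, reduction=4):
        super().__init__()
        d = 3 * num_children
        self.fc1 = nn.Linear(d, d // reduction)   # W1: (3N/t) x 3N
        self.fc2 = nn.Linear(d // reduction, num_children)  # W2: N x (3N/t)

    def squeeze(self, u_obj, z_cls):
        # (i) cosine agreement of each instantiation projection with the mean
        mean_u = u_obj.mean(dim=0, keepdim=True)
        a = F.cosine_similarity(u_obj, mean_u, dim=1)                  # (N,)
        # (ii) KL divergence from the linear-opinion-pool distribution
        p_i = F.softmax(z_cls, dim=1)                                  # (N, K)
        pool = p_i.mean(dim=0, keepdim=True)                           # (1, K)
        b = (pool * (pool.clamp_min(1e-8).log()
                     - p_i.clamp_min(1e-8).log())).sum(dim=1)          # (N,)
        # (iii) variance of each child's class distribution (uncertainty)
        c = p_i.var(dim=1, unbiased=False)                             # (N,)
        return torch.cat([a, b, c], dim=0)                             # (3N,)

    def forward(self, u_obj, z_cls):
        s = self.squeeze(u_obj, z_cls)
        r = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))               # (N,)
        v_obj = (r.unsqueeze(1) * u_obj).sum(dim=0)                    # (64,)
        v_cls = (r.unsqueeze(1) * z_cls).sum(dim=0)                    # (K,)
        return v_obj, v_cls, r
```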
4 DEFORMABLE CAPSULES ON MS COCO
We evaluated our deformable capsule object detection framework on the MS COCO dataset (Lin et al., 2014), which contains 118K training, 5K validation and 20K hold-out testing images. Average precision (AP) is reported over all IOU thresholds and at thresholds 0.5 (AP50) and 0.75 (AP75). We followed the training procedure proposed in Zhou et al. (2019a), training on 512 × 512 pixel inputs, yielding 128 × 128 detection grids, using random flip, random scaling (between 0.6 and 1.3), cropping, and color jittering as data augmentation, and Adam (Kingma & Ba, 2014) to optimize our objective function. Due to limited compute resources, we initialized the backbone network weights from CenterNet and only trained for 40 epochs with a batch size of 12 and a learning rate of 5e-4 with 5× drops at 5, 15, and 25 epochs. Longer training would likely yield superior results, as found by Zhou et al. (2019a), who obtained better results for CenterNet when increasing from 140 to 230 epochs.
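The key training hyper-parameters above can be summarized in a small configuration sketch (names are ours; the data pipeline and model construction are omitted).

```python
# Training configuration described above, collected for reference.
config = dict(
    input_size=512,          # 512 x 512 inputs
    output_stride=4,         # -> 128 x 128 detection grid
    epochs=40,
    batch_size=12,
    base_lr=5e-4,
    lr_drop_epochs=(5, 15, 25),
    lr_drop_factor=0.2,      # "5x drops"
    scale_range=(0.6, 1.3),  # random scaling augmentation
)

def lr_at_epoch(epoch, cfg=config):
    lr = cfg["base_lr"]
    for e in cfg["lr_drop_epochs"]:
        if epoch >= e:
            lr *= cfg["lr_drop_factor"]
    return lr
```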
In Table 1, we provide results of our proposed deformable capsule network with and without flip and multi-scale augmentations, following Zhou et al. (2019a). Inference time on our hardware (Intel Xeon E5-2687 CPU, Titan V GPU, PyTorch 1.2.0, CUDA 10.0, and CUDNN 7.6.5) was consistent with that reported by Zhou et al. (2019a).1 While DeformCaps performs slightly worse than CenterNet in terms of AP, it does so while producing far fewer false positive detections, as shown in Table 2 in Appendix A.2. For ablations, we trained a version of DeformCaps which replaces the proposed deformable capsules with standard locally-constrained convolutional capsules (Non-DeformCaps), and a version which removes the routing procedure (No-Routing). These ablations show the contribution of each component of the proposed method.
1 We will make our code publicly available to the community for reproducible research.
5 RELATED WORKS
5.1 CAPSULE NETWORKS
The idea of capsules was first introduced by Hinton et al. (2011). Sabour et al. (2017) extended this and proposed dynamic routing between capsules. The EM routing algorithm was then proposed by Hinton et al. (2018). Recently, capsule networks have achieved state-of-the-art performance on a wide range of applications: video object segmentation (Duarte et al., 2019), point cloud segmentation (Zhao et al., 2019), explainable medical diagnosis (LaLonde et al., 2020), text classification (Zhao et al., 2018), sentiment analysis (Wang et al., 2018), and various other applications (Vijayakumar, 2019).
5.2 OBJECT DETECTION
Region proposal-based approaches: R-CNN was one of the first successful deep object detectors; a selective search algorithm was used to select a number of region proposals, and CNN features were extracted from each of the region proposals and used to both classify the object and regress its bounding box (Girshick et al., 2014). The later addition of Fast R-CNN (Girshick, 2015) provided end-to-end training and addressed the speed and efficiency issues of R-CNN.
Anchor-based approaches: Anchor-based approaches sample fixed-shape bounding boxes (anchors) over a low-resolution image grid, then attempt to classify each anchor into an object class. Faster R-CNN (Ren et al., 2015) generates region proposals in a first-stage network, then attempts to classify and regress bounding boxes for the top-k highest-scoring anchors in a second-stage network. Later studies such as Redmon et al. (2016) dramatically sped up the process by converting the proposal classifier into a multi-class one-stage detector. Since then, researchers have been working on improving one-stage detectors by including shape priors (Redmon & Farhadi, 2017; 2018), multiple feature resolutions (Liu et al., 2016), re-weighting the loss among different samples (Lin et al., 2017), or modeling channel-wise attention (Chen et al., 2020).
Keypoint estimation-based approaches: CornerNet (Law & Deng, 2018) attempts to detect objects by predicting two bounding box corners as keypoints. ExtremeNet (Zhou et al., 2019b) extends CornerNet's approach by estimating all corners and the center of the objects' bounding box. However, these methods rely on a significantly slow combinatorial grouping post-processing stage. Zhou et al. (2019a) proposed CenterNet, which attempts to predict only an object's center point and regress all other necessary values from there, without the need for grouping or post-processing.
6 DISCUSSIONS, LIMITATIONS, & FUTURE WORK
Our proposed deformable capsules (DeformCaps), with SplitCaps object-class representations and the Squeeze-and-Excitation inspired SE-Routing algorithm, represent an important step for capsule networks to scale up to large-scale computer vision problems, such as object detection or large-scale classification. Our proposed one-stage object detection capsule network is able to obtain results on MS COCO which are on par with other state-of-the-art one-stage CNN-based networks for the first time in the literature, while also producing fewer false positives. Examining the qualitative results, provided in Appendix A.3, lends empirical evidence that DeformCaps can better generalize to unusual poses/viewpoints of objects than CenterNet (Zhou et al., 2019a). We hope our work will inspire future research into the considerable potential of capsule networks.
Limitations: Our study contains some limitations, discussed in greater detail in Appendix A.1. Briefly, (1) we had difficulty integrating the bounding box regression values into our capsule object detection head; (2) the choice of descriptors used in the squeeze is somewhat handcrafted, and is open to further investigation; (3) the number of dimensions used to model the class-agnostic instantiation parameters of objects was chosen semi-arbitrarily and could likely be improved by a finer search; and (4) the choice of reconstructing objects' masks versus image patches is not thoroughly explored.
Future directions: The reconstruction sub-network of DeformCaps could possibly be trained to produce a fast single-shot instance segmentation framework. At test time, detected objects could have their instantiation vectors reconstructed into object masks, and these masks would simply be resized to the predicted bounding boxes, similar to He et al. (2017) but without needing the initial reshaping and ROI alignment required in their two-stage approach.
A APPENDIX
A.1 EXTENDED EXPLANATIONS OF LIMITATIONS AND POSSIBLE RECOMMENDATIONS
In the discussion section of the main body of our paper, we mentioned four potential limitations in our study. We would like to discuss these in a bit more detail here. Since so many components of our method are newly introduced, there is a wide range of choices which could be investigated and improved by future researchers and engineers, and we suggest a few of those here:
(1) We had difficulty in integrating the bounding box regression values into our capsule object detection head. In our implementation, the class-agnostic capsules are trained to predict scale-normalized masks of 28 × 28. Ultimately, we would like to integrate predicting the object masks and the boxes for those masks together, as these tasks surely share mutual information. However, to the best of our knowledge, no published works exist for using capsules on a real-valued regression task.
(2) For our proposed SE-Routing, as with the original Squeeze-and-Excitation network, the choice of descriptors computed in the squeeze is somewhat handcrafted. We propose to use the cosine angle, KL divergence, and variance, and provide justifications for each of these choices, then allow the excitation to learn which of these pieces of information is most beneficial dynamically for each given input. Nonetheless, it is completely plausible that different descriptors could yield superior results. We unfortunately do not have the compute resources to run ablation studies over each of these chosen descriptors individually.
(3) The choice of 64 dimensions to model the class-agnostic instantiation parameters was decided somewhat empirically. As we argued in the main paper, it is unlikely that all variations across object poses are completely class-independent; thus, to represent these extra dimensions of variation, we increase our vector lengths considerably (16 → 64). However, it is possible that the number of class-independent and class-dependent variations is significantly higher or lower than the value chosen, and will largely depend on the complexity of the data being modeled. This difficulty is analogous to determining the optimal number of convolutional filters to use at every given layer of a CNN. Related to this, there is the potential for the class-dependent dimensions of the instantiation vectors to have unwanted influence over the cosine angle descriptors when attempting to represent objects of other classes. It could be beneficial to pass class information from the class presence capsule type over to the object instantiation capsule type to dynamically attend to the relevant dimensions of its vector for a given object. In a similar manner, it could be beneficial, when computing the probability aggregation using the linear opinion pool, to weight the expert opinions in proportion to their uncertainty instead of uniformly.
(4) We chose to reconstruct objects' masks with the motivation of forcing the network to learn variations in shape, pose, and deformation. Since CNNs are known to be biased toward texture information
over shape, we chose not to explicitly supervise the learning of any texture information. Nonetheless, it is plausible that reconstructing the object with texture could yield superior performance. Further, we chose to set the value of the reconstruction regularization’s contribution to the loss to 0.1, following what was found most beneficial by CenterNet (Zhou et al., 2019a) for weighting the size loss contribution, and from a concern to not over-regularize the network early in training, then stepped this value to 2.0 half-way through training to make its value roughly equal to the other loss terms. From our experience, the accuracy remained fairly consistent across values up to 2.0 for this term, while setting its weight to 0.0 resulted in a degradation of performance. We found that increasing the value during training led to faster improvements in performance, consistent with other works in the literature that use such a regularization term. Engineering efforts on this parameter, such as a temperature function to automatically increase this weight during training, may prove beneficial if the goal is to reach the maximum possible accuracy.
A.2 ANALYSIS OF FALSE POSITIVES
DeformCaps tends to be more conservative with its detections than CenterNet. This can be observed both in the slightly lower confidence scores (typically 0.1 less than CenterNet for most detections) and in the overall smaller number of boxes placed in scenes. CenterNet tends to produce far more false positives than DeformCaps, both in the case of incorrect detections and in the case of multiple detections for the same object which fail to be suppressed by the NMS algorithm. Though DeformCaps producing slightly lower confidence scores might account for some of the reduction in false positives, we observe CenterNet consistently producing fairly confident false predictions in regions where DeformCaps does not produce a detection at all (see qualitative examples in Appendix A.3). A quantitative analysis of this is provided in Table 2. These numbers are generated using the official MS COCO evaluation code in its standard operation. However, instead of only returning the average precision (AP) ratio of true positives (TP) and false positives (FP), namely TP/(TP + FP), we also return the raw FP count.
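To illustrate what a raw FP count means here, the following sketch tallies true and false positives for a single image and class by greedily matching score-sorted detections to ground-truth boxes at an IoU threshold; it mirrors the idea described above rather than the internals of the official COCO evaluation code, and the function names are ours.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_tp_fp(dets, gts, iou_thr=0.5):
    """dets: list of (box, score); gts: list of boxes. Returns (TP, FP)."""
    used = [False] * len(gts)
    tp = fp = 0
    for box, _ in sorted(dets, key=lambda d: -d[1]):
        best_j, best_iou = -1, iou_thr
        for j, g in enumerate(gts):
            o = iou(box, g)
            if not used[j] and o >= best_iou:
                best_j, best_iou = j, o
        if best_j >= 0:
            used[best_j] = True
            tp += 1
        else:
            fp += 1
    return tp, fp
```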
In the default operation of CenterNet, there is no non-maximum suppression (NMS) operation performed. Instead, a sort of pseudo-NMS is performed by passing a 3 × 3 max-pooling operation over the detection grid activations to extract objects' centers. When running CenterNet with multi-scale testing, an NMS operation is then added to effectively choose which scale is the best fit for a given object detection. Therefore, the false positives seen in Appendix A.3 are a direct result of multiple object centers being predicted incorrectly for the same object, or of object centers being predicted where there are no objects. We find that DeformCaps, which predicts objects as capsule vectors rather than scalars, does not suffer from the former class of FPs.
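The pseudo-NMS step referred to above can be written in a few lines: a 3 × 3 max-pool keeps only the local maxima of the center heatmap, so nearby duplicate peaks are zeroed without any box-based NMS (a sketch; the real implementation also gathers the top-k peaks and their offsets).

```python
import torch.nn.functional as F

def extract_peaks(heat, k=3):
    """Keep only local maxima of a (B, K, H, W) center heatmap."""
    pooled = F.max_pool2d(heat, k, stride=1, padding=(k - 1) // 2)
    keep = (pooled == heat).float()
    return heat * keep  # non-peak locations become zero
```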
This observation of fewer false positive detections is consistent with what we would expect from a capsule network with dynamic routing as compared to a traditional convolutional neural network (CNN). Where a CNN passes on all activations to the next layer, capsule networks utilize a dynamic routing algorithm to only pass on activations if there is agreement amongst the child capsule projections. In our proposed method specifically, with a SplitCaps structure and SE-Routing, the agreement is computed for projections of both the pose and class of the object being represented. It follows naturally that this would limit the number of false positive detections which are produced, by reducing the amount of activations that get passed on. Further, we find from a survey of these qualitative examples that DeformCaps is better able to detect objects presented in an unusual pose or from an unusual viewpoint than its CenterNet counterpart. This gives empirical support to one of the purported benefits of capsule networks: the ability to better generalize to unseen poses and viewpoints.
A.3 QUALITATIVE RESULTS ON MS COCO
Included in this section are a number of qualitative examples for CenterNet (Zhou et al., 2019a) and the proposed DeformCaps on the MS COCO test-dev dataset (Lin et al., 2014) using both flip and multi-scale augmentations. Results for CenterNet were obtained using the official code and trained models provided by the authors (Zhou et al., 2019a). While we do not make any sweeping claims, we wish to comment on a few general patterns that seemed to emerge in these examples. In Figure 3, we show a prototypical example of the general trend we are describing. In Figures 4–5, we include more examples of this trend. In Figures 6–7, we include a set of interesting examples of unusual object viewpoints or poses being better captured by DeformCaps than by CenterNet.

1. What is the main contribution of the paper regarding capsule networks for object detection?
2. What are the strengths and weaknesses of the proposed framework compared to baseline and SOTA solutions?
3. How does the reviewer assess the analysis in Appendix A.2, particularly regarding CenterNet and NMS?
4. Why does the reviewer find the writing and approach section unclear?
5. What questions does the reviewer have regarding the computation of FPS numbers in Table 1?
6. What is the purpose of using KL in Section 3.2, and how might its asymmetry affect the results?

Review
summary
This paper introduces a capsule network for object detection. To solve the issues of capsule networks when applied to large-scale detection problems, this paper develops deformable capsules, a new prediction head SplitCaps, and a dynamic routing algorithm, SE-Routing. Experiments are conducted on COCO where it performs slightly worse than the baselines but arguably predicts fewer false positives.
pros
I appreciate the spirit of developing new architectures for a highly-competitive task like object detection. It's totally acceptable that the performance cannot fully match the SOTA results in this kind of scenario. This paper is insightful as it studies several important issues of capsule networks when applied to large-scale visual tasks. The proposed solutions at least make the framework runnable on standard hardware.
cons
My current rating is not purely based on the performance, but it is one of the major concerns here. The proposed framework (1) performs worse than the baseline -- CenterNet, and (2) performs much worse than the SOTA solutions (Tab.1) in terms of both performance and speed. Note that the network weights are initialized from CenterNet.
The whole analysis in Appendix A.2 is very confusing for several reasons:
CenterNet achieves higher AP (TP/(TP+FP)) and also a high FP count, which is quite a common thing to me. Ignoring the ratio and simply comparing the absolute value of FP doesn't seem to be the correct thing to do.
Some visualizations are weird, like the human and dog in Fig.4. Why can't NMS remove such highly-redundant predictions? It seems more like a bug to me.
On page 12, top row, it writes "CenterNet consistently producing fairly confident false predictions (e.g. > 0.4)"; how is this threshold of 0.4 chosen? It's not a standard thing, and in fact if we raise it to 0.5 or higher, most of the FPs from CenterNet will disappear. Moreover, is this how FP gets counted? If yes, I think this is problematic as the total number (TP+FP) changes across methods. A fairer way is to choose the top-k (k=100 in COCO) predictions and compute the metrics.
Lots of references are missing. For example, Tab.1 compares multiple different methods but most of them are not cited. The related work only cites a few very classical papers, and most recent works are missing.
The writing, especially the approach section, isn't very clear. I had a relatively hard time understanding how each module works, and what the motivations behind these design choices are.
Moreover, there are a couple of new things proposed, and each one of them has specific design choices and parameters. However, there are no ablation studies properly studying them. As a result, I don't understand which module is working, and how it works.
questions
How do the FPS numbers in Tab.1 get computed? Are they simply borrowed from the original papers or some published work? Some of them look strange to me.
Why KL in Sec.3.2? Does the asymmetric nature of KL influence the results?
Let us give a concrete example to better illustrate the parameter savings of this technique. Given a H = 128 by W = 128 grid of child capsules with ci = 32 capsule types of ai = 8 atoms each, being routed to a set of cj = 10 parent capsule types of aj = 16 atoms each, the fully-connected capsules of Sabour et al. (2017) would requireH×W×ci×ai×cj×aj ⇒ 128×128×32×8×10×16 ≈ 671M parameters for this layer alone (assuming the goal is classification, with detection requiring a multiplicative increase by the detection grid size). Instead, using our proposed deformable capsules with a k2 = 52 kernel, we only require 2× k× k× ai × cj × aj ⇒ 2× 5× 5× 8× 10× 16 ≈ 64K parameters. Convolutional capsules with locally-constrained routing require 32K parameters, not needing the additional spatial offsets kernel, but as mentioned above, they are fundamentally limited in the poses and deformations that they can represent. In our experiments, we found deformable capsules to converge faster and to much higher performance than convolutional capsules.
3 OBJECTS AS CAPSULES: SplitCaps WITH SE-Routing
We propose a novel one-stage (single-shot) capsule network architecture for object detection, called DeformCaps, where objects are detected, classified, and modeled with capsules. Our overall network architecture, shown in Fig. 1, is built upon CenterNet by Zhou et al. (2019a) who proposed to represent objects in images as scalar point values located at the center of their bounding boxes. The authors then regress the remaining characteristics of the object (e.g. height, width, depth) for each center-point detected. In our work, we follow the same center-point detection paradigm, but represent our objects with capsule vectors instead. Since several recent studies have found utilizing a CNN backbone before forming capsule types to be beneficial to overall performance (Duarte et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020), we adopt the preferred backbone of CenterNet, DLA-34 (Yu et al., 2018), as this gave the best trade-off between speed and accuracy. Features extracted from the backbone are sent to our capsule object detection head and a bounding box regression head. In order to perform object detection using capsules, we introduce a new capsule structure, called SplitCaps, composed of class-agnostic and class-presence capsules, and a novel routing algorithm, called SE-Routing.
3.1 SPLITCAPS: CLASS-AGNOSTIC CAPSULES AND CLASS-PRESENCE CAPSULES
As discussed in Section 2, the original CapsNet by Sabour et al. (2017) was extremely expensive in computation and our proposed deformable capsules is a best possible solution to balance non-rigid deformations of objects while remaining memory efficient. However, there is a more significant memory hurdle capsule networks must overcome when scaling up to large-scale datasets, such as MS COCO. In their current implementation, capsule networks represent each class with its own parent
capsule vector. On small scale classification datasets (and using deformable routing), this is not an issue; it amounts to 2× k× k× ai × cj × aj parameters and N × ci × cj × aj × 4 bytes to store the intermediate representations to be routed for the parents, where cj is usually around 10 classes to represent. Let us suppose we have 5× 5 kernels with 32 input capsule types of 8 atoms per capsule, 10 output capsule types of 16 atoms per capsule, and a batch size of 32. In total, we would have 2× 5× 5× 8× 10× 16 = 64K parameters and 32× 32× 10× 16× 4 ≈ 655 KB. When we scale this up to object detection, we now need to store representations for every possible object location, and for MS COCO we need to represent 80 possible classes. This gives us 2× k × k×ai× cj ×aj ⇒ 2×5×5×8×80×16 = 512K parameters and N ×H×W × ci× cj ×aj ×4 bytes⇒ 32× 128× 128× 32× 80× 16× 4 ≈ 86 GB for the intermediate representations, where we assume the output grid of detections is 128 × 128 with a single detection (i.e. bounding box) predicted per class per location. The problem is not any better for large-scale classification datasets such as ImageNet either, where we lose the grid of predictions but grow to 1000 classes, which would require 2× 5× 5× 8× 1000× 16 = 6.4M parameters and 32× 32× 1000× 16× 4 ≈ 66 GB for the intermediate representations. Clearly, with most GPU memories limited to 12–24 GB, a solution is needed to be found for capsule networks to scale up to larger-scale computer vision tasks.
To overcome this issue, we propose a new type of capsule architecture, SplitCaps, to more efficiently scale capsule networks to large-scale computer vision tasks. SplitCaps contains two parent capsule types, each with a different number of atoms per capsule, for each location of the detection grid. As before, the idea is to balance efficiency with the ability to learn powerful representations. Towards this goal, SplitCaps proposes to divide up between its two parent capsules the tasks of (i) learning the instantiation parameters necessary to model the possible variations of an object and (ii) predicting which classes of objects are present in a given input. The first capsule type we refer is our classagnostic object instantiation capsules, and the second we refer is class presence capsules.
Class-agnostic object instantiation capsules: The purpose of these capsules is similar to those in previous works: model the possible variations (in pose, deformation, texture, etc.) of objects within a vector of instantiation parameters, the span of which should cover all possible variations for that object at test (hence why capsules are better than CNNs at generalizing to unseen poses). While previous capsule networks did this in a class-wise manner, we argue such a formulation is not required and possibly it is redundant. Many variations (e.g. rotation, skew, stroke thickness) may be class-independent, and thus to model these variations class-wise would require repetition across each capsule type. Instead, we propose to model all classes within a single capsule type (i.e. class-agnostic). In this way, while class-dependent variations would each require their own dimensions of the vector, class-independent variations can each be modeled along a single dimension for all possible objects. Since it is reasonable to assume there will be at least some class-specific variations, we increase the default capsule vector dimension from 16 to 64 to accommodate for possible class-specific instantiation parameters.
During training, if an object is present at a given spatial location, the 64-dimensional capsule vector for that location is fed to a reconstruction regularization sub-network to construct the mask of that object as similar to the reconstruction regularization used by Sabour et al. (2017) for classification. This sub-network is a relatively small and fast addition: a set of three ReLU-activated 1× 1 convolutional layers with 256 filters each, followed by a final sigmoid-activated 1 × 1 convolution with N = n2 = 282 = 784 filters, before reshaping outputs to n× n. Since objects’ scales vary dramatically, we scale normalize all objects’ ground-truth masks to be 28 × 28 (by following He et al. (2017)). Supervised training is conducted by computing the Dice loss (Milletari et al., 2016) between the predicted reconstruction, r, and the object’s mask, m
Lr = 2 ∑N i rimi∑N
i r 2 i + ∑N i m 2 i , (1)
where Lr is used to provide a regularization signal to the instantiation parameters being learned. Class presence capsules: The class presence capsules attempt to model which classes of objects are present in the input at each spatial location, if any. We accomplish this by setting the atoms per capsule to the number of classes being represented (i.e. 80 for MS COCO). Just as Hinton et al. (2018) separately modeled pose (with a matrix) and activation (with a scalar), this 80-dimensional vector can be viewed as a class-dependent set of activation values. The activation values are then passed through a sigmoid function and thresholded; if one or more activation values are above the
threshold, an object is determined to be at that spatial location with the strongest activated dimension determining the class label.
In order to produce a smooth loss function during training, we create a ground-truth heatmap by fitting a Gaussian distribution rather than a single point, to the center-point of each object’s bounding box, with variance proportional to the size of the box following Zhou et al. (2019a). More specifically, we create a heatmap H ∈ [0, 1]Xd ×Yd ×K containing each down-scaled ground truth center-point p̃ = ( px d , py d ) for class k ∈ K using a Gaussian kernelHxyk = exp ( − (x−p̃x) 2+(y−p̃y)2 2σ2p ) , where d is the amount of downsampling in the network and σp is an object-size-adaptive standard deviation (Law & Deng, 2018). In the case of overlapping Gaussians, we take the element-wise maximum. To handle the large class imbalance between objects and background in our heatmaps, we use a penalty-reduced pixel-wise logistic regression with a focal loss (Lin et al., 2017):
Lh = −1 P ∑ xyk
{ (1− Ĥxyk)α log(Ĥxyk) if Hxyk = 1,
(1−Hxyk)β(Ĥxyk)α log(1− Ĥxyk), otherwise; (2)
where α, β are hyper-parameters of the focal loss, P is the number of center-points in the input, used to normalize all positive focal loss instances to 1 (Zhou et al., 2019a). We use α = 2 and β = 4 in all our experiments, following Law & Deng (2018). At test, to efficiently retrieve the object’s exact center, we run a 3× 3 max-pooling over the thresholded spatial map. To predict the height and width of the bounding boxes of objects and recover the x, y offsets needed to map back to the upscaled image dimensions, we follow the same formulation as Zhou et al. (2019a) and pass the backbone features through a 3× 3 convolutional layer with 256 feature maps, then a 1× 1 convolutional layer with 2 feature maps. These layers predict the local offset, Ô ∈ RWd ×Hd ×2, and size prediction, Ŝ ∈ RWd ×Hd ×2, for each center-point and are supervised by
Lo = 1
P ∑ p ∣∣∣Ôp̃ − (p d − p̃ )∣∣∣ and Ls = 1 P ∑ p ∣∣∣Ŝp − (x2 − x1, y2 − y1)∣∣∣ , (3) respectively. Our final objective function is thus defined as L = Lh + λrLr + λsLs + λoLo. We keep λs = 0.1 and λo = 1 as done in Zhou et al. (2019a), and set λr = 0.1 initially, then step up to λr = 2.0 at the half-way in training.
3.2 SE-ROUTING: SPLITCAPS NEW ROUTING ALGORITHM
As a core component of capsule networks, dynamic routing seeks to maximize the agreement between child capsule projections for parent capsules and to fully-leverage the richer representations being stored. Since SplitCaps introduces a unique capsule head structure, where instantiation parameters and activations are split across different capsule types, previous dynamic routing algorithms can no longer be directly applied. To overcome this, we propose a new dynamic routing algorithm that takes inspiration from Squeeze-and-Excitation networks (Hu et al., 2018), which we call SE-Routing, and illustrated in Fig. 2.
Previously proposed dynamic routing algorithms (e.g. Sabour et al. (2017) and Hinton et al. (2018)) were typically iterative, requiring a hand-tuned loop of routing iterations, which proved to be slow and temperamental in practice. Different studies found different numbers of iterations to be effective, and one meta-study of five different iterative dynamic routing algorithms found them all to be largely ineffective (Paik et al., 2019). To avoid this pitfall, we propose a new routing algorithm to dynamically assign weights to child capsule projections based on their agreement, computed in a single forward pass using a simple gating mechanism with sigmoid activation. Unlike Sabour et al. (2017) which uses a routing softmax to force a one-hot mapping of information from each child to parents, our proposed SE-Routing learns a non-mutually-exclusive relationship between children and parents to allow multiple children to be emphasised for each parent.
Creating child capsule projection descriptors (squeeze): Following the Squeeze-and-Excitation paradigm, we first must compute the squeeze (i.e. a set of descriptors which summarize relevant information about each feature) to create a set of child capsule projection descriptors. In Hu et al. (2018), the authors proposed to use the global average activation of each channel with the goal of modeling channel interdependencies. In this study, our goal is to maximize the agreement
between child projections, for both the instantiation parameters and class presence of the object being modeled. With that motivation, we compute three separate descriptors which are fed into the excitation phase of the routing: (i) cosine angle between the mean projection vector and each child’s projection, which captures object instantiation agreement; (ii) Kullback–Leibler (KL) divergence of each child’s predicted class distribution and an aggregated distribution, which captures class presence agreement; and (iii) variance of each child’s predicted class distribution, which captures class presence uncertainty.
The cosine angle descriptor, a, is calculated in a similar manner to Sabour et al. (2017). A mean projection vector, ũ = 1/N ∑N i ûi, is first computed using the set of child capsule projections, Û = {û1, û2, ..., ûN}. Then we compute a set of cosine angles between each individual projection and this mean, a = {a1, a2, ..., aN}, where ai = (ũ · ûi)/(|ũ| · |ûi|). In a similar fashion, we compute a KL divergence descriptor, b, by first creating an aggregate object class distribution. To create this aggregate distribution, we follow the work of Clemen & Winkler (1999), insofar as each child capsule type is treated as an expert giving its prediction about the true underlying class distribution. First, we compute a simple linear opinion pool (Stone, 1961), p(z̃) =∑N i σs(zi)/N , where p(z̃) is the aggregated probability distribution, Z = {z1, z2, ...,zN} is the
set of child class presence projection vectors, and σs(zi)j = ezij/ ∑K k e
zik for j = {1, ...,K}, i = {1, ..., N} is the softmax function used to transform projection vectors into normalized probability distributions over the K classes. Then, we measure the agreement between each child’s predicted distributions, σs(zi), and the aggregate distribution, p(z̃), as the KL divergence between them bi = ∑K k p(z̃k) log(p(z̃k)/σs(zi)k).
Lastly, we take our child capsules’ predicted distributions, σs(zi), and compute their variance to estimate the uncertainty each child has: ci = ∑K k (σs(zi)k − ∑K k σs(zi)k)
2. Our three sets of descriptors are efficiently computed for all capsules simultaneously (i.e. for entire batch and across spatial locations) on GPU in parallel with matrix operations. They are then concatenated, s = a⊕ b⊕ c, and fed to the excitation layers of our routing mechanism (Fig. 2). Determining routing coefficients (excitation): The excitation stage of the SE-Routing algorithm has the task of learning a mapping from the concatenated set of capsule descriptors, s, into a set
of routing coefficients for the child capsule projections. Since parents capsules types are no longer different classes in our formulation, but rather two separated aspects of modeling objects, we compute a single set of routing coefficients at each spatial location for both parents. Formally, this mapping is computed as r = σ(W2δ(W1s)), where W1 ∈ R 3N t ×3N , W2 ∈ RN× 3N t , δ is the ReLU activation function, σ is the sigmoid activation function, and t is the reduction ratio used to form this mapping into a two fully-connected (FC) layer bottleneck. A brief note: although excitation routing has interesting parallels to self-attention (e.g. dynamically conditioned on the input), our learned mapping is non-mutually-exclusive, while self-attention and CapsNet’s dynamic routing both rely on applying a softmax function over outputs.
Finally, with the determined routing coefficients r = {r1, r2, ..., rN}, we can compute the output of the SplitCaps detection head. Projection vectors from each child to each parent are computed using the proposed deformable capsules (as described in Section 2). These projections are then combined and weighted by the routing coefficients to form the final parent capsules. These final parents contain the instantiation parameters, vobj = ∑N i riûobj|i, and class presence, vcls = ∑N i riûcls|i, of any objects being represented at the given spatial location within the detection grid.
4 DEFORMABLE CAPSULES ON MS COCO
We evaluated our deformable capsule object detection framework on the MS COCO dataset (Lin et al., 2014), which contains 118K training, 5K validation and 20K hold-out testing images. Average precision (AP) is reported over all IOU thresholds and at thresholds 0.5 (AP50) and 0.75 (AP75). We followed the training procedure proposed in Zhou et al. (2019a), training on 512 × 512 pixel inputs, yielding 128× 128 detection grids, using random flip, random scaling (between 0.6 to 1.3), cropping, and color jittering as data augmentation, and Adam (Kingma & Ba, 2014) to optimize our objective function. Due to limited compute resources, we initialized the backbone network weights from CenterNet and only train for 40 epochs with a batch size of 12 and learning rate of 5e-4 with 5× drops at 5, 15, and 25 epochs. Longer training would likely yield superior results, as found by Zhou et al. (2019a) who obtained better results for CenterNet when increasing from 140 to 230 epochs.
In Table 1, we provide results of our proposed deformable capsule network with and without flip and multi-scale augmentations following Zhou et al. (2019a). Inference time on our hardware (Intel Xeon E5-2687 CPU, Titan V GPU, Pytorch 1.2.0, CUDA 10.0, and CUDNN 7.6.5) was consistent with those reported by Zhou et al. (2019a).1 While DeformCaps performs slightly worse than CenterNet in terms of AP, it does so while producing far fewer false positive detections, as shown in Table 2 in Appendix A.2. For ablations, we trained a version of DeformCaps which replaces the proposed deformable capsules with the standard locally-constrained convolutional capsules (nonDeformCaps), and a version which removed the routing procedures (No-Routing). These ablations show the contribution of each component of the proposed method.
1We will make our code publicly available for the community for reproducible research.
5 RELATED WORKS
5.1 CAPSULE NETWORKS
The idea of capsules was first introduced by Hinton et al. (2011). Sabour et al. (2017) extended this and proposed dynamic routing between capsules. The EM routing algorithm was then modified by Hinton et al. (2018). Recently, capsule networks have achieved state-of-the-art performance for a wide range of applications: video object segmentation (Duarte et al., 2019), point cloud segmentation (Zhao et al., 2019), explainable medical diagnosis (LaLonde et al., 2020), text classification (Zhao et al., 2018), sentiment analysis (Wang et al., 2018), and various other applications (Vijayakumar, 2019).
5.2 OBJECT DETECTION
Region proposal-based approaches: R-CNN was one of the first successful deep object detectors, in which a selective search algorithm was used to select a number of region proposals; CNN features were then extracted from each of the region proposals and used to both classify the object and regress its bounding box (Girshick et al., 2014). The later addition of Fast R-CNN (Girshick, 2015) provided end-to-end training and addressed the speed and efficiency issues of R-CNN.
Anchor-based approaches: Anchor-based approaches sample fixed-shape bounding boxes (anchors) around a low-resolution image grid, then attempt to classify anchors into object classes. Faster R-CNN (Ren et al., 2015) generates region proposals in a first-stage network, then attempts to classify and regress bounding boxes for the top-k highest scoring anchors in a second-stage network. Later studies such as Redmon et al. (2016) dramatically sped up the process by converting the proposal classifier to a multi-class one-stage detector. Since then, researchers have been working on improving one-stage detectors by including shape priors (Redmon & Farhadi, 2017; 2018), multiple feature resolutions (Liu et al., 2016), re-weighting the loss among different samples (Lin et al., 2017), or modeling channel-wise attention (Chen et al., 2020).
Keypoint estimation-based approaches: CornerNet (Law & Deng, 2018) attempts to detect objects by predicting two bounding box corners as keypoints. ExtremeNet (Zhou et al., 2019b) extends CornerNet’s approach by estimating all corners and the center of the objects’ bounding box. However, these methods rely on a significantly slow combinatorial grouping post-processing stage. Zhou et al. (2019a) proposed CenterNet, which attempts to predict only an object’s center point and regress all other necessary values from there, without the need for grouping or post-processing.
6 DISCUSSIONS, LIMITATIONS, & FUTURE WORK
Our proposed deformable capsules (DeformCaps) with SplitCaps object-class representations and Squeeze-and-Excitation inspired SE-Routing algorithm represents an important step for capsule networks to scale-up to large-scale computer vision problems, such as object detection or large-scale classification. Our proposed one-stage object detection capsule network is able to obtain results on MS COCO which are on-par with other state-of-the-art one-stage CNN-based networks for the first time in the literature, while also producing fewer false positives. Examining the qualitative results, provided in Appendix A.3, lends empirical evidence that DeformCaps can better generalize to unusual poses/viewpoints of objects than CenterNet (Zhou et al., 2019a). We hope our work will inspire future research into the considerable potential of capsule networks.
Limitations: Our study contains some limitations, discussed in greater detail in Appendix A.1. Briefly, (1) we had difficulty integrating the bounding box regression values into our capsule object detection head; (2) the choice of descriptors used in the squeeze is somewhat handcrafted, and is open to further investigation; (3) the choice of dimensions to model the class-agnostic instantiation parameters of objects was chosen semi-arbitrarily and could likely improve from fine-search; and (4) the choice of reconstructing objects’ masks versus image patches is not thoroughly explored.
Future directions: The reconstruction sub-network of DeformCaps could possibly be trained to produce a fast single-shot instance segmentation framework. At test, potentially detected objects could have their instantiation vectors reconstructed into objects’ masks, then these masks would simply be resized to the predicted bounding-boxes, similar to He et al. (2017) but without needing to have the initial reshape and ROI alignment required in their two-stage approach.
A APPENDIX
A.1 EXTENDED EXPLANATIONS OF LIMITATIONS AND POSSIBLE RECOMMENDATIONS
In the discussion section of the main body of our paper, we mentioned four potential limitations in our study. We would like to discuss these in a bit more detail here. Since so many components of our method are newly introduced, there is a wide range of choices which could be investigated and improved by future researchers and engineers, and we suggest a few of those here:
(1) We had difficulty in integrating the bounding box regression values into our capsule object detection head. In our implementation, the class-agnostic capsules are trained to predict scale-normalized masks of 28 × 28. Ultimately, we would like to integrate predicting the object masks and the boxes for those masks together, as these tasks surely share mutual information. However, to the best of our knowledge, no published works exist for using capsules on a real-valued regression task.
(2) For our proposed SE-Routing, as with the original Squeeze-and-Excitation network, the choice of descriptors computed in the squeeze is somewhat handcrafted. We propose to use the cosine angle, KL divergence, and variance, and provide justifications for each of these choices, then allow the excitation to learn which of these pieces of information is most beneficial dynamically for each given input. Nonetheless, it is completely plausible that different descriptors could yield superior results. We unfortunately do not have the compute resources to run ablation studies over each of these chosen descriptors individually.
(3) The choice of 64 dimensions to model the class-agnostic instantiation parameters was decided somewhat empirically. As we argued in the main paper, it is unlikely that all variations across object poses are completely class independent; thus, to represent these extra dimensions of variation, we increase our vector lengths considerably (16→ 64). However, it is possible that the number of classindependent and class-dependent variations is significantly higher or lower than the value chosen, and largely will depend on the complexity of the data being modeled. This difficulty is analogous to determining the optimal number of convolutional filters to use at every given layer of a CNN. Related to this, there is the potential for the class-dependent dimensions of the instantiation vectors to have unwanted influence over the cosine angle descriptors when attempting to represent objects of other classes. It could be beneficial to pass class information from the class presence capsule type over to the object instantiation capsule type to dynamically attend to the relevant dimensions of its vector for a given object. In a similar manner, it could be beneficial when computing the probability aggregation using the linear opinion pool to weight the expert opinions in proportion to their uncertainty instead of uniformly.
(4) We chose to reconstruct objects’ masks with the motivation of forcing the network to learn variations in shape, pose, and deformations. Since CNNs are known to be biased to texture information
over shape, we chose not to explicitly supervise the learning of any texture information. Nonetheless, it is plausible that reconstructing the object with texture could yield superior performance. Further, we chose to set the value of the reconstruction regularization’s contribution to the loss to 0.1, following what was found most beneficial by CenterNet (Zhou et al., 2019a) for weighting the size loss contribution, and from a concern to not over-regularize the network early in training, then stepped this value to 2.0 half-way through training to make its value roughly equal to the other loss terms. From our experience, the accuracy remained fairly consistent across values up to 2.0 for this term, while setting its weight to 0.0 resulted in a degradation of performance. We found that increasing the value during training led to faster improvements in performance, consistent with other works in the literature that use such a regularization term. Engineering efforts on this parameter, such as a temperature function to automatically increase this weight during training, may prove beneficial if the goal is to reach the maximum possible accuracy.
A.2 ANALYSIS OF FALSE POSITIVES
DeformCaps tends to be more conservative with its detections than CenterNet. This can be observed both by the slightly lower confidence scores (typically 0.1 less than CenterNet for most detections), and by the overall smaller number of boxes placed in scenes. CenterNet tends to produce far more false positives than DeformCaps, both in the case of incorrect detections and of multiple detections for the same object which failed to be suppressed by the NMS algorithm. Though DeformCaps producing slightly lower confidence scores might account for some of the reduction in false positives, we observe CenterNet consistently producing fairly confident false predictions while DeformCaps does not produce a detection in the same region at all (see qualitative examples in Appendix A.3). A quantitative analysis of this is provided in Table 2. These numbers are generated using the official MS COCO evaluation code in its standard operation. However, instead of only returning the average precision (AP) ratio of true positives (TP) and false positives (FP), namely TP/(TP + FP), we also return the raw FP count.
In the default operation of CenterNet, there is no non-maximum suppression (NMS) operation performed. Instead, a sort of pseudo-NMS is performed by passing a 3 × 3 max-pooling operation over the detection grid activations to extract objects’ centers. When running CenterNet with multi-scale testing, an NMS operation is then added to effectively choose which scale is the best fit for a given object detection. Therefore, the false positives seen in Appendix A.3 are a direct result of multiple object centers being predicted incorrectly for the same object or object centers being predicted where there are no objects. We find that DeformCaps, which predicts objects as capsule vectors and not scalars, does not suffer from the former class of FPs.
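For readers unfamiliar with this pseudo-NMS, the following is a small PyTorch sketch of extracting center-points by keeping only the local maxima of the class-presence heatmap with a 3 × 3 max-pooling; the confidence threshold and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def extract_centers(heat: torch.Tensor, threshold: float = 0.3):
    # heat: (B, K, H, W) sigmoid-activated class-presence heatmap.
    pooled = F.max_pool2d(heat, kernel_size=3, stride=1, padding=1)
    peaks = heat * (pooled == heat).float()      # keep only local maxima (pseudo-NMS)
    keep = peaks > threshold                     # confidence threshold (assumed value)
    batch_idx, cls_idx, ys, xs = torch.nonzero(keep, as_tuple=True)
    return batch_idx, cls_idx, ys, xs, peaks[keep]
```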
This observation of fewer false positive detections is consistent with what we would expect from a capsule network with dynamic routing as compared to a traditional convolutional neural network (CNN). Where a CNN passes on all activations to the next layer, capsule networks utilize a dynamic routing algorithm to only pass on activations if there is agreement amongst the child capsule projections. In our proposed method specifically, with a SplitCaps structure and SE-Routing, the agreement is computed for projections of both the pose and class of the object being represented. It follows naturally that this would limit the number of false positive detections produced, by reducing the number of activations that get passed on. Further, we find from a survey of these qualitative examples that DeformCaps is better able to detect objects presented in an unusual pose or from an unusual viewpoint than its CenterNet counterpart. This gives empirical support to one of the purported benefits of capsule networks, namely the ability to better generalize to unseen poses and viewpoints.
A.3 QUALITATIVE RESULTS ON MS COCO
Included in this section are a number of qualitative examples for CenterNet (Zhou et al., 2019a) and the proposed DeformCaps on the MS COCO test-dev dataset (Lin et al., 2014) using both flip and multi-scale augmentations. Results for CenterNet were obtained using the official code and trained models provided by the authors (Zhou et al., 2019a). While we do not make any sweeping claims, we wish to comment on a few general patterns that seemed to emerge in these examples. In Figure 3, we show a prototypical example of the general trend we are describing. In Figures 4– 5, we include more examples of this trend. In Figures 6– 7, we include a set of interesting examples of unusual object viewpoints or poses being better captured by DeformCaps than by CenterNet. | 1. What are the main contributions and novelties of the paper regarding capsule-based object detection?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of efficiency and experimental results?
3. How does the reviewer assess the significance and impact of the work in the context of capsule research and object detection tasks?
4. Are there any concerns or suggestions regarding the sufficiency of experiments and ablation studies in the paper?
5. How does the reviewer evaluate the clarity and effectiveness of the writing and figures in conveying the key components and ideas of the paper? | Review | Review
The paper proposes to use capsules to perform object detection on COCO. Capsules, while showing promise, are usually too expensive for tasks beyond MNIST and CIFAR. The authors propose three key improvements (DeformCaps, SplitCaps, and SE-Routing) to improve efficiency and therefore allow capsules to be applied to larger tasks such as object detection. The authors claim novelties in:
-- First ever capsule-based object detection on COCO.
-- Deformable capsules that let parents sample child capsules instead of having fixed connections, to improve efficiency.
-- SplitCaps: a reparametrization trick to reduce the number of capsules needed.
-- Squeeze-and-excitation routing that finds agreement without iterative loop.
I agree with the authors' novelty claims.
Strengths:
-- Impressive results: Capsule Networks are well known to be computationally expensive. Getting them to work on large images is certainly no easy task. There have been other works operating on larger images, as noted by this paper, but they usually cannot achieve quality on par with CNNs. Having said that, I need to note that the SOTA for COCO has consistently improved, and is at ~51% AP [a], quite a bit better than the numbers listed in Table 1.
-- Good novelty: DeformCapsule and SplitCaps both follow a straightforward pattern to dissect prohibitively large tensor predictions into smaller ones. And together with the SE-routing methods, the novelty should be sufficient for this venue.
-- Clear writing. Figure 1 and 2 helped a lot with the understanding of the key components as well.
[a]: SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization
Weaknesses:
-- Insufficient experimentation: There is only a single experimental table that contains both comparisons to the SOTA and ablations to DeformCaps and SE-Routing. It would be nice to see additional results, be it on another dataset / task (e.g. sem seg), or even additional ablations. For example, one premise of capsules was the requirement of less training data. Is it true here as well?
Conclusion:
I'd recommend acceptance because of the strong results and good novelty. The only major weakness is the lack of additional ablations to highlight the advantages (if not on AP) over CNNs. Nonetheless, I believe this work is a good step towards more interest and advances in capsules, and will be of interest to the audience of this venue.
Post-rebuttal: I concur with R1 in that the results look poor when taking into account how close the proposed method is to CenterNet and that it was finetuned from a CenterNet. Therefore, I'll lower my rating to 6. |
ICLR | Title
Deformable Capsules for Object Detection
Abstract
Capsule networks promise significant benefits over convolutional networks by storing stronger internal representations, and routing information based on the agreement between intermediate representations’ projections. Despite this, their success has been mostly limited to small-scale classification datasets due to their computationally expensive nature. Recent studies have partially overcome this burden by locally-constraining the dynamic routing of features with convolutional capsules. Though memory efficient, convolutional capsules impose geometric constraints which fundamentally limit the ability of capsules to model the pose/deformation of objects. Further, they do not address the bigger memory concern of class-capsules scaling-up to bigger tasks such as detection or large-scale classification. In this study, we introduce deformable capsules (DeformCaps), a new capsule structure (SplitCaps), and a novel dynamic routing algorithm (SE-Routing) to balance computational efficiency with the need for modeling a large number of objects and classes. We demonstrate that the proposed methods allow capsules to efficiently scale-up to large-scale computer vision tasks for the first time, and create the first-ever capsule network for object detection in the literature. Our proposed architecture is a one-stage detection framework and obtains results on MS COCO which are on-par with state-of-the-art one-stage CNN-based methods, while producing fewer false positive detections.
1 INTRODUCTION
Capsule networks promise many potential benefits over convolutional neural networks (CNNs). These include practical benefits, such as requiring less data for training or better handling unbalanced class distributions (Jiménez-Sánchez et al., 2018), and important theoretical benefits, such as building in stronger internal representations of objects (Punjabi et al., 2020), and modeling the agreement between those intermediate representations which combine to form final object representations (e.g. part-whole relationships) (Kosiorek et al., 2019; Sabour et al., 2017). Although these benefits might not be seen in the performance metrics (e.g. average precision) on standard benchmark computer vision datasets, they are important for real-world applications. As an example, it was found by Alcorn et al. (2019) that CNNs fail to recognize 97% of their pose space, while capsule networks have been shown to be far more robust to pose variations of objects (Hinton et al., 2018); further, real-world datasets are not often as extensive and cleanly distributed as ImageNet or MS COCO.
These benefits are achieved in capsule networks by storing richer vector (or matrix) representations of features, rather than the simple scalars of CNNs, and dynamically choosing how to route that information through the network. The instantiation parameters for a feature are stored in these capsule vectors and contain information (e.g. pose, deformation, hue, texture) useful for constructing the object being modeled. Early studies have shown strong evidence that these vectors do in fact capture important local and global variations across objects’ feature components (or parts) within a class (Punjabi et al., 2020; Sabour et al., 2017). Inside their networks, capsules dynamically route their information, seeking to maximize the agreement between these vector feature representations and the higher-level feature vectors they are attempting to form.
Despite their potential benefits, many have remained unconvinced about the general applicability of capsule networks to large-scale computer vision tasks. To date, no capsule-based study has achieved classification performance comparable to a CNN on datasets such as ImageNet, instead relegated to smaller datasets such as MNIST or CIFAR. Worse still, to the best of our knowledge, no capsule network has shown successful results in object detection, a very important problem in computer
vision, robotics, and medical imaging. Now, the argument can be made that standard benchmark datasets such as ImageNet or MS COCO likely contain the majority of objects in that 3% range of usual poses, and thus CNNs will appear to perform extremely well when measured in terms of accuracy, stripping capsule networks of one of their largest advantages. However, until capsule networks can perform on par with CNNs on these typical object poses, few will care about the benefits of stronger internal representations and better generalization to unseen poses.
Summary of Our Contributions: (1) We propose the first ever capsule-based object detection framework in the literature. Our network is a one-stage (single-shot) architecture, where objects are both localized and classified using capsules, and can perform on-par with the state-of-the-art CNNs on a large-scale dataset (MS COCO). (2) We address the geometric constraint of convolutional capsules (and locally-constrained routing) by introducing deformable capsules, where parent capsules learn to adaptively sample child capsules, effectively eliminating rigid spatial restrictions while remaining memory efficient. (3) We design a new capsule-based prediction head structure, SplitCaps, which reformulates the projections of an objects’ instantiation parameters, presence, and class, eliminating the previous dimensional increase of capsules by the number of classes. This crucial addition enables the training of capsule networks on large-scale computer vision datasets for the first time in the literature. (4) To route information across SplitCaps’ unique structure, we introduce a novel Squeezeand-Excitation inspired dynamic routing algorithm, SE-Routing, which seeks to maximize agreement between child capsule projections, without the need of iterative loops
2 DEFORMABLE CAPSULES: FIXING LOCALLY-CONSTRAINED DYNAMIC ROUTING
The capsule network architecture proposed by Sabour et al. (2017) acted on global information, where digit capsules represented the pose and presence of digits in an image regardless of spatial location. The information from all children in the previous layer was sent to every parent in the following layer, weighted via the routing coefficients found in a cosine similarity routing algorithm. While this proved to be a highly-effective strategy, it was also computationally expensive, limiting its use to only small-scale datasets. Recent works attempted to scale up capsule networks to larger problems such as biomedical image segmentation (LaLonde & Bagci, 2018) or action detection in video (Duarte et al., 2018) by using convolutional capsules and locally-constraining the routing algorithm. Although efficient solutions were presented in those studies, the representation power of capsule networks was fundamentally limited due to imposing local constraints. This is because convolutions, by design, have a fixed geometric structure (Dai et al., 2017), and such a geometric constraint significantly inhibits capsules’ ability to model part-whole relationships, relying on parts of objects to fall within a fixed local grid. Therefore, it is unreasonable to expect a capsule to effectively represent the pose and deformations of an object when the information related to the parts of that object are locked into a fixed spatial relationship.
In this study, we propose to effectively solve this aforementioned problem by introducing a method that balances efficiency with the ability for a capsule to represent any pose and deformation of an object (i.e. where child capsules can be found in different spatial relationships to one another for the same parent). In such a formulation, global information is not explicitly required, but it does require parent capsules to have more flexibility over which child capsules they draw information from. Our proposed solution is deformable capsules. The idea behind the proposed algorithm is simple: if parent capsules are supposed to capture common deformations of the objects they represent within their vectors, then the choice of which children to aggregate information from must be handled in a deformable manner as well. Deformable capsules allow parents to adaptively gather projections from a non-spatially-fixed set of children, and thus effectively and efficiently model objects’ poses.
To achieve this overall goal, we follow the same efficient convolutional capsule paradigm, where projection vectors are formed via a convolution operation with a kernel centered on the parent capsules’ spatial location, but now we learn an additional set of weights for each parent capsule. These learnable weights are the same shape as each parent’s kernel, and represent the offset values for the spatial sampling of child capsules for that parent. Based on the child capsule representation vectors in the previous layer, these weights learn which children a parent capsule should adaptively sample from for a given input image. Dynamic routing then determines how to weight the information coming from each of these children based on their agreement for each projected parent.
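A hedged sketch of this idea is given below, using torchvision's deformable convolution to realize spatially adaptive sampling of child capsules. Note that this is only one possible reading of the mechanism: here the offsets are predicted from the child capsules by an auxiliary convolution, whereas the text describes a second learned weight tensor of the same shape as the projection kernel, so the exact parameterization may differ. All layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableCapsuleProjection(nn.Module):
    """Illustrative projection of child capsules to parent votes with learned sampling offsets."""
    def __init__(self, in_channels: int = 256, out_channels: int = 144, k: int = 5):
        # in_channels:  child capsule types * atoms flattened as channels (e.g. 32 * 8)
        # out_channels: parent capsule types * atoms (e.g. 64 + 80 for the two SplitCaps parents)
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels, k, k) * 0.01)
        # Offsets (2 values per kernel location) predicted from the child capsules themselves.
        self.offset_conv = nn.Conv2d(in_channels, 2 * k * k, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)                                  # (B, 2*k*k, H, W)
        return deform_conv2d(x, offsets, self.weight, padding=self.k // 2)
```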
Let us give a concrete example to better illustrate the parameter savings of this technique. Given an H = 128 by W = 128 grid of child capsules with c_i = 32 capsule types of a_i = 8 atoms each, being routed to a set of c_j = 10 parent capsule types of a_j = 16 atoms each, the fully-connected capsules of Sabour et al. (2017) would require H × W × c_i × a_i × c_j × a_j ⇒ 128 × 128 × 32 × 8 × 10 × 16 ≈ 671M parameters for this layer alone (assuming the goal is classification, with detection requiring a multiplicative increase by the detection grid size). Instead, using our proposed deformable capsules with a k^2 = 5^2 kernel, we only require 2 × k × k × a_i × c_j × a_j ⇒ 2 × 5 × 5 × 8 × 10 × 16 ≈ 64K parameters. Convolutional capsules with locally-constrained routing require 32K parameters, not needing the additional spatial offsets kernel, but as mentioned above, they are fundamentally limited in the poses and deformations that they can represent. In our experiments, we found deformable capsules to converge faster and to much higher performance than convolutional capsules.
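The parameter counts quoted above can be checked with a few lines of arithmetic; the snippet below simply reproduces the numbers in the text.

```python
H, W = 128, 128          # capsule grid
ci, ai = 32, 8           # child capsule types and atoms
cj, aj = 10, 16          # parent capsule types and atoms
k = 5                    # kernel size

fully_connected = H * W * ci * ai * cj * aj      # Sabour et al. (2017) style
deformable      = 2 * k * k * ai * cj * aj       # kernel weights plus offset weights
convolutional   = k * k * ai * cj * aj           # locally-constrained convolutional capsules

print(fully_connected, deformable, convolutional)  # 671088640, 64000, 32000
```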
3 OBJECTS AS CAPSULES: SplitCaps WITH SE-Routing
We propose a novel one-stage (single-shot) capsule network architecture for object detection, called DeformCaps, where objects are detected, classified, and modeled with capsules. Our overall network architecture, shown in Fig. 1, is built upon CenterNet by Zhou et al. (2019a) who proposed to represent objects in images as scalar point values located at the center of their bounding boxes. The authors then regress the remaining characteristics of the object (e.g. height, width, depth) for each center-point detected. In our work, we follow the same center-point detection paradigm, but represent our objects with capsule vectors instead. Since several recent studies have found utilizing a CNN backbone before forming capsule types to be beneficial to overall performance (Duarte et al., 2018; Kosiorek et al., 2019; Tsai et al., 2020), we adopt the preferred backbone of CenterNet, DLA-34 (Yu et al., 2018), as this gave the best trade-off between speed and accuracy. Features extracted from the backbone are sent to our capsule object detection head and a bounding box regression head. In order to perform object detection using capsules, we introduce a new capsule structure, called SplitCaps, composed of class-agnostic and class-presence capsules, and a novel routing algorithm, called SE-Routing.
3.1 SPLITCAPS: CLASS-AGNOSTIC CAPSULES AND CLASS-PRESENCE CAPSULES
As discussed in Section 2, the original CapsNet by Sabour et al. (2017) was extremely computationally expensive, and our proposed deformable capsules offer a solution that balances modeling the non-rigid deformations of objects with memory efficiency. However, there is a more significant memory hurdle capsule networks must overcome when scaling up to large-scale datasets, such as MS COCO. In their current implementation, capsule networks represent each class with its own parent
capsule vector. On small-scale classification datasets (and using deformable routing), this is not an issue; it amounts to 2 × k × k × a_i × c_j × a_j parameters and N × c_i × c_j × a_j × 4 bytes to store the intermediate representations to be routed for the parents, where c_j is usually around 10 classes to represent. Let us suppose we have 5 × 5 kernels with 32 input capsule types of 8 atoms per capsule, 10 output capsule types of 16 atoms per capsule, and a batch size of 32. In total, we would have 2 × 5 × 5 × 8 × 10 × 16 = 64K parameters and 32 × 32 × 10 × 16 × 4 ≈ 655 KB. When we scale this up to object detection, we now need to store representations for every possible object location, and for MS COCO we need to represent 80 possible classes. This gives us 2 × k × k × a_i × c_j × a_j ⇒ 2 × 5 × 5 × 8 × 80 × 16 = 512K parameters and N × H × W × c_i × c_j × a_j × 4 bytes ⇒ 32 × 128 × 128 × 32 × 80 × 16 × 4 ≈ 86 GB for the intermediate representations, where we assume the output grid of detections is 128 × 128 with a single detection (i.e. bounding box) predicted per class per location. The problem is not any better for large-scale classification datasets such as ImageNet either, where we lose the grid of predictions but grow to 1000 classes, which would require 2 × 5 × 5 × 8 × 1000 × 16 = 6.4M parameters and 32 × 32 × 1000 × 16 × 4 ≈ 66 GB for the intermediate representations. Clearly, with most GPU memories limited to 12–24 GB, a solution must be found for capsule networks to scale up to larger-scale computer vision tasks.
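The 86 GB figure for the detection setting follows directly from the size of the intermediate vote tensor; the short snippet below reproduces that arithmetic (float32, 4 bytes per value).

```python
# Intermediate vote tensor for class-wise parent capsules on the COCO detection grid (float32).
batch, grid_h, grid_w, child_types, num_classes, atoms = 32, 128, 128, 32, 80, 16
vote_bytes = batch * grid_h * grid_w * child_types * num_classes * atoms * 4
print(round(vote_bytes / 1e9))  # ~86 GB, far beyond a 12-24 GB GPU
```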
To overcome this issue, we propose a new type of capsule architecture, SplitCaps, to more efficiently scale capsule networks to large-scale computer vision tasks. SplitCaps contains two parent capsule types, each with a different number of atoms per capsule, for each location of the detection grid. As before, the idea is to balance efficiency with the ability to learn powerful representations. Towards this goal, SplitCaps proposes to divide up between its two parent capsules the tasks of (i) learning the instantiation parameters necessary to model the possible variations of an object and (ii) predicting which classes of objects are present in a given input. We refer to the first capsule type as our class-agnostic object instantiation capsules, and to the second as our class presence capsules.
Class-agnostic object instantiation capsules: The purpose of these capsules is similar to those in previous works: model the possible variations (in pose, deformation, texture, etc.) of objects within a vector of instantiation parameters, the span of which should cover all possible variations for that object at test (hence why capsules are better than CNNs at generalizing to unseen poses). While previous capsule networks did this in a class-wise manner, we argue such a formulation is not required and possibly it is redundant. Many variations (e.g. rotation, skew, stroke thickness) may be class-independent, and thus to model these variations class-wise would require repetition across each capsule type. Instead, we propose to model all classes within a single capsule type (i.e. class-agnostic). In this way, while class-dependent variations would each require their own dimensions of the vector, class-independent variations can each be modeled along a single dimension for all possible objects. Since it is reasonable to assume there will be at least some class-specific variations, we increase the default capsule vector dimension from 16 to 64 to accommodate for possible class-specific instantiation parameters.
During training, if an object is present at a given spatial location, the 64-dimensional capsule vector for that location is fed to a reconstruction regularization sub-network to construct the mask of that object, similar to the reconstruction regularization used by Sabour et al. (2017) for classification. This sub-network is a relatively small and fast addition: a set of three ReLU-activated 1 × 1 convolutional layers with 256 filters each, followed by a final sigmoid-activated 1 × 1 convolution with N = n^2 = 28^2 = 784 filters, before reshaping outputs to n × n. Since objects’ scales vary dramatically, we scale-normalize all objects’ ground-truth masks to be 28 × 28 (following He et al. (2017)). Supervised training is conducted by computing the Dice loss (Milletari et al., 2016) between the predicted reconstruction, r, and the object’s mask, m
L_r = \frac{2\sum_{i}^{N} r_i m_i}{\sum_{i}^{N} r_i^2 + \sum_{i}^{N} m_i^2}, (1)
where L_r is used to provide a regularization signal to the instantiation parameters being learned.
Class presence capsules: The class presence capsules attempt to model which classes of objects are present in the input at each spatial location, if any. We accomplish this by setting the atoms per capsule to the number of classes being represented (i.e. 80 for MS COCO). Just as Hinton et al. (2018) separately modeled pose (with a matrix) and activation (with a scalar), this 80-dimensional vector can be viewed as a class-dependent set of activation values. The activation values are then passed through a sigmoid function and thresholded; if one or more activation values are above the
threshold, an object is determined to be at that spatial location with the strongest activated dimension determining the class label.
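A minimal PyTorch sketch of the Dice reconstruction regularization of Eq. (1) and of the class-presence read-out just described is shown below. The shapes, the threshold value, and the use of 1 − Dice as the quantity to minimize are illustrative assumptions rather than the authors' exact formulation.

```python
import torch

def dice_reconstruction_loss(recon: torch.Tensor, mask: torch.Tensor, eps: float = 1e-6):
    # recon, mask: (B, 28, 28) predicted reconstruction and ground-truth mask in [0, 1].
    recon, mask = recon.flatten(1), mask.flatten(1)
    inter = (recon * mask).sum(dim=1)
    denom = (recon ** 2).sum(dim=1) + (mask ** 2).sum(dim=1)
    dice = 2.0 * inter / (denom + eps)            # Eq. (1)
    return 1.0 - dice.mean()                      # minimizing 1 - Dice maximizes Eq. (1)

def read_out_classes(class_capsule: torch.Tensor, threshold: float = 0.5):
    # class_capsule: (B, 80) class-presence activations at one grid location.
    probs = torch.sigmoid(class_capsule)
    present = probs > threshold                   # any class above threshold => object present
    labels = probs.argmax(dim=1)                  # strongest activation gives the class label
    return present, labels
```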
In order to produce a smooth loss function during training, we create a ground-truth heatmap by fitting a Gaussian distribution rather than a single point, to the center-point of each object’s bounding box, with variance proportional to the size of the box following Zhou et al. (2019a). More specifically, we create a heatmap H ∈ [0, 1]^{(X/d) × (Y/d) × K} containing each down-scaled ground truth center-point p̃ = (p_x/d, p_y/d) for class k ∈ K using a Gaussian kernel H_{xyk} = exp(−((x − p̃_x)^2 + (y − p̃_y)^2) / (2σ_p^2)), where d is the amount of downsampling in the network and σ_p is an object-size-adaptive standard deviation (Law & Deng, 2018). In the case of overlapping Gaussians, we take the element-wise maximum. To handle the large class imbalance between objects and background in our heatmaps, we use a penalty-reduced pixel-wise logistic regression with a focal loss (Lin et al., 2017):
L_h = \frac{-1}{P} \sum_{xyk} \begin{cases} (1 - \hat{H}_{xyk})^{\alpha} \log(\hat{H}_{xyk}) & \text{if } H_{xyk} = 1, \\ (1 - H_{xyk})^{\beta} (\hat{H}_{xyk})^{\alpha} \log(1 - \hat{H}_{xyk}) & \text{otherwise;} \end{cases} \quad (2)
where α, β are hyper-parameters of the focal loss, P is the number of center-points in the input, used to normalize all positive focal loss instances to 1 (Zhou et al., 2019a). We use α = 2 and β = 4 in all our experiments, following Law & Deng (2018). At test, to efficiently retrieve the object’s exact center, we run a 3 × 3 max-pooling over the thresholded spatial map. To predict the height and width of the bounding boxes of objects and recover the x, y offsets needed to map back to the upscaled image dimensions, we follow the same formulation as Zhou et al. (2019a) and pass the backbone features through a 3 × 3 convolutional layer with 256 feature maps, then a 1 × 1 convolutional layer with 2 feature maps. These layers predict the local offset, Ô ∈ R^{(W/d) × (H/d) × 2}, and size prediction, Ŝ ∈ R^{(W/d) × (H/d) × 2}, for each center-point and are supervised by
L_o = \frac{1}{P} \sum_{p} \left| \hat{O}_{\tilde{p}} - \left(\frac{p}{d} - \tilde{p}\right) \right| \quad \text{and} \quad L_s = \frac{1}{P} \sum_{p} \left| \hat{S}_{p} - (x_2 - x_1, y_2 - y_1) \right|, (3)
respectively. Our final objective function is thus defined as L = L_h + λ_r L_r + λ_s L_s + λ_o L_o. We keep λ_s = 0.1 and λ_o = 1 as done in Zhou et al. (2019a), and set λ_r = 0.1 initially, then step it up to λ_r = 2.0 halfway through training.
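For concreteness, here is a hedged PyTorch sketch of the penalty-reduced focal loss of Eq. (2); the clamping epsilon and tensor shapes are assumptions added for numerical stability and illustration, and the total objective is indicated only as a comment.

```python
import torch

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    # pred, gt: (B, K, H, W); pred is the sigmoid heatmap, gt the Gaussian-splatted target.
    pred = pred.clamp(eps, 1.0 - eps)
    pos = gt.eq(1).float()
    neg = 1.0 - pos
    pos_loss = ((1.0 - pred) ** alpha) * torch.log(pred) * pos
    neg_loss = ((1.0 - gt) ** beta) * (pred ** alpha) * torch.log(1.0 - pred) * neg
    num_pos = pos.sum().clamp(min=1.0)            # P in Eq. (2)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos

# Total objective as defined above; lambda_r is stepped from 0.1 to 2.0 halfway through training.
# loss = l_h + lambda_r * l_r + 0.1 * l_s + 1.0 * l_o
```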
3.2 SE-ROUTING: SPLITCAPS NEW ROUTING ALGORITHM
As a core component of capsule networks, dynamic routing seeks to maximize the agreement between child capsule projections for parent capsules and to fully-leverage the richer representations being stored. Since SplitCaps introduces a unique capsule head structure, where instantiation parameters and activations are split across different capsule types, previous dynamic routing algorithms can no longer be directly applied. To overcome this, we propose a new dynamic routing algorithm that takes inspiration from Squeeze-and-Excitation networks (Hu et al., 2018), which we call SE-Routing, and illustrated in Fig. 2.
Previously proposed dynamic routing algorithms (e.g. Sabour et al. (2017) and Hinton et al. (2018)) were typically iterative, requiring a hand-tuned loop of routing iterations, which proved to be slow and temperamental in practice. Different studies found different numbers of iterations to be effective, and one meta-study of five different iterative dynamic routing algorithms found them all to be largely ineffective (Paik et al., 2019). To avoid this pitfall, we propose a new routing algorithm to dynamically assign weights to child capsule projections based on their agreement, computed in a single forward pass using a simple gating mechanism with sigmoid activation. Unlike Sabour et al. (2017) which uses a routing softmax to force a one-hot mapping of information from each child to parents, our proposed SE-Routing learns a non-mutually-exclusive relationship between children and parents to allow multiple children to be emphasised for each parent.
Creating child capsule projection descriptors (squeeze): Following the Squeeze-and-Excitation paradigm, we first must compute the squeeze (i.e. a set of descriptors which summarize relevant information about each feature) to create a set of child capsule projection descriptors. In Hu et al. (2018), the authors proposed to use the global average activation of each channel with the goal of modeling channel interdependencies. In this study, our goal is to maximize the agreement
between child projections, for both the instantiation parameters and class presence of the object being modeled. With that motivation, we compute three separate descriptors which are fed into the excitation phase of the routing: (i) cosine angle between the mean projection vector and each child’s projection, which captures object instantiation agreement; (ii) Kullback–Leibler (KL) divergence of each child’s predicted class distribution and an aggregated distribution, which captures class presence agreement; and (iii) variance of each child’s predicted class distribution, which captures class presence uncertainty.
The cosine angle descriptor, a, is calculated in a similar manner to Sabour et al. (2017). A mean projection vector, ũ = 1/N ∑N i ûi, is first computed using the set of child capsule projections, Û = {û1, û2, ..., ûN}. Then we compute a set of cosine angles between each individual projection and this mean, a = {a1, a2, ..., aN}, where ai = (ũ · ûi)/(|ũ| · |ûi|). In a similar fashion, we compute a KL divergence descriptor, b, by first creating an aggregate object class distribution. To create this aggregate distribution, we follow the work of Clemen & Winkler (1999), insofar as each child capsule type is treated as an expert giving its prediction about the true underlying class distribution. First, we compute a simple linear opinion pool (Stone, 1961), p(z̃) =∑N i σs(zi)/N , where p(z̃) is the aggregated probability distribution, Z = {z1, z2, ...,zN} is the
set of child class presence projection vectors, and σ_s(z_i)_j = e^{z_{ij}} / ∑_{k}^{K} e^{z_{ik}} for j = {1, ..., K}, i = {1, ..., N} is the softmax function used to transform projection vectors into normalized probability distributions over the K classes. Then, we measure the agreement between each child’s predicted distribution, σ_s(z_i), and the aggregate distribution, p(z̃), as the KL divergence between them: b_i = ∑_{k}^{K} p(z̃_k) log(p(z̃_k)/σ_s(z_i)_k).
Lastly, we take our child capsules’ predicted distributions, σ_s(z_i), and compute their variance to estimate the uncertainty each child has: c_i = ∑_{k}^{K} (σ_s(z_i)_k − ∑_{k}^{K} σ_s(z_i)_k)^2.
Our three sets of descriptors are efficiently computed for all capsules simultaneously (i.e. for the entire batch and across spatial locations) on GPU in parallel with matrix operations. They are then concatenated, s = a ⊕ b ⊕ c, and fed to the excitation layers of our routing mechanism (Fig. 2).
Determining routing coefficients (excitation): The excitation stage of the SE-Routing algorithm has the task of learning a mapping from the concatenated set of capsule descriptors, s, into a set
of routing coefficients for the child capsule projections. Since parent capsule types are no longer different classes in our formulation, but rather two separate aspects of modeling objects, we compute a single set of routing coefficients at each spatial location for both parents. Formally, this mapping is computed as r = σ(W_2 δ(W_1 s)), where W_1 ∈ R^{(3N/t) × 3N}, W_2 ∈ R^{N × (3N/t)}, δ is the ReLU activation function, σ is the sigmoid activation function, and t is the reduction ratio used to form this mapping into a two fully-connected (FC) layer bottleneck. A brief note: although excitation routing has interesting parallels to self-attention (e.g. dynamically conditioned on the input), our learned mapping is non-mutually-exclusive, while self-attention and CapsNet’s dynamic routing both rely on applying a softmax function over outputs.
Finally, with the determined routing coefficients r = {r_1, r_2, ..., r_N}, we can compute the output of the SplitCaps detection head. Projection vectors from each child to each parent are computed using the proposed deformable capsules (as described in Section 2). These projections are then combined and weighted by the routing coefficients to form the final parent capsules. These final parents contain the instantiation parameters, v_obj = ∑_{i}^{N} r_i û_{obj|i}, and class presence, v_cls = ∑_{i}^{N} r_i û_{cls|i}, of any objects being represented at the given spatial location within the detection grid.
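To summarize the squeeze step in code, the hedged PyTorch sketch below computes the three descriptors (cosine angle to the mean projection, KL divergence to the linear opinion pool, and per-child variance) for a batch of child projections. Tensor shapes are illustrative assumptions, and a standard mean-centered variance is used in place of the expression above.

```python
import torch
import torch.nn.functional as F

def squeeze_descriptors(u_obj: torch.Tensor, z_cls: torch.Tensor, eps: float = 1e-8):
    # u_obj: (B, N, 64) child projections for the object-instantiation parent
    # z_cls: (B, N, K)  child projection logits for the class-presence parent
    u_mean = u_obj.mean(dim=1, keepdim=True)                              # (B, 1, 64)
    a = F.cosine_similarity(u_obj, u_mean.expand_as(u_obj), dim=-1)       # (B, N) cosine angles

    p_child = F.softmax(z_cls, dim=-1)                                    # per-child distributions
    p_pool = p_child.mean(dim=1, keepdim=True)                            # linear opinion pool
    b = (p_pool * (p_pool / p_child.clamp_min(eps)).log()).sum(dim=-1)    # KL(pool || child), (B, N)
    c = p_child.var(dim=-1, unbiased=False)                               # per-child variance, (B, N)

    return torch.cat([a, b, c], dim=1)                                    # s, shape (B, 3N)
```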
4 DEFORMABLE CAPSULES ON MS COCO
We evaluated our deformable capsule object detection framework on the MS COCO dataset (Lin et al., 2014), which contains 118K training, 5K validation and 20K hold-out testing images. Average precision (AP) is reported over all IOU thresholds and at thresholds 0.5 (AP50) and 0.75 (AP75). We followed the training procedure proposed in Zhou et al. (2019a), training on 512 × 512 pixel inputs, yielding 128 × 128 detection grids, using random flip, random scaling (between 0.6 and 1.3), cropping, and color jittering as data augmentation, and Adam (Kingma & Ba, 2014) to optimize our objective function. Due to limited compute resources, we initialized the backbone network weights from CenterNet and only trained for 40 epochs with a batch size of 12 and a learning rate of 5e-4 with 5× drops at 5, 15, and 25 epochs. Longer training would likely yield superior results, as found by Zhou et al. (2019a), who obtained better results for CenterNet when increasing from 140 to 230 epochs.
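As above, the optimization schedule (Adam, a learning rate of 5e-4, and 5× drops at epochs 5, 15, and 25 over 40 epochs) can be expressed in PyTorch roughly as follows; the model and training loop are placeholders assumed to be defined elsewhere.

```python
import torch

# `model` and `train_one_epoch` are placeholders assumed to be defined elsewhere.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[5, 15, 25], gamma=0.2)
for epoch in range(40):
    train_one_epoch(model, optimizer)
    scheduler.step()
```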
In Table 1, we provide results of our proposed deformable capsule network with and without flip and multi-scale augmentations following Zhou et al. (2019a). Inference time on our hardware (Intel Xeon E5-2687 CPU, Titan V GPU, Pytorch 1.2.0, CUDA 10.0, and CUDNN 7.6.5) was consistent with those reported by Zhou et al. (2019a).1 While DeformCaps performs slightly worse than CenterNet in terms of AP, it does so while producing far fewer false positive detections, as shown in Table 2 in Appendix A.2. For ablations, we trained a version of DeformCaps which replaces the proposed deformable capsules with the standard locally-constrained convolutional capsules (nonDeformCaps), and a version which removed the routing procedures (No-Routing). These ablations show the contribution of each component of the proposed method.
1We will make our code publicly available for the community for reproducible research.
5 RELATED WORKS
5.1 CAPSULE NETWORKS
The idea of capsules was first introduced by Hinton et al. (2011). Sabour et al. (2017) extended this and proposed dynamic routing between capsules. The EM routing algorithm was then modified by Hinton et al. (2018). Recently, capsule networks have achieved state-of-the-art performance for a wide range of applications: video object segmentation (Duarte et al., 2019), point cloud segmentation (Zhao et al., 2019), explainable medical diagnosis (LaLonde et al., 2020), text classification (Zhao et al., 2018), sentiment analysis (Wang et al., 2018), and various other applications (Vijayakumar, 2019).
5.2 OBJECT DETECTION
Region proposal-based approaches: R-CNN was one of the first successful deep object detectors, in which a selective search algorithm was used to select a number of region proposals; CNN features were then extracted from each of the region proposals and used to both classify the object and regress its bounding box (Girshick et al., 2014). The later addition of Fast R-CNN (Girshick, 2015) provided end-to-end training and addressed the speed and efficiency issues of R-CNN.
Anchor-based approaches: Anchor-based approaches sample fixed-shape bounding boxes (anchors) around a low-resolution image grid, then attempt to classify anchors into object classes. Faster R-CNN (Ren et al., 2015) generates region proposals in a first-stage network, then attempts to classify and regress bounding boxes for the top-k highest scoring anchors in a second-stage network. Later studies such as Redmon et al. (2016) dramatically sped up the process by converting the proposal classifier to a multi-class one-stage detector. Since then, researchers have been working on improving one-stage detectors by including shape priors (Redmon & Farhadi, 2017; 2018), multiple feature resolutions (Liu et al., 2016), re-weighting the loss among different samples (Lin et al., 2017), or modeling channel-wise attention (Chen et al., 2020).
Keypoint estimation-based approaches: CornerNet (Law & Deng, 2018) attempts to detect objects by predicting two bounding box corners as keypoints. ExtremeNet (Zhou et al., 2019b) extends CornerNet’s approach by estimating all corners and the center of the objects’ bounding box. However, these methods rely on a significantly slow combinatorial grouping post-processing stage. Zhou et al. (2019a) proposed CenterNet, which attempts to predict only an object’s center point and regress all other necessary values from there, without the need for grouping or post-processing.
6 DISCUSSIONS, LIMITATIONS, & FUTURE WORK
Our proposed deformable capsules (DeformCaps) with SplitCaps object-class representations and Squeeze-and-Excitation inspired SE-Routing algorithm represents an important step for capsule networks to scale-up to large-scale computer vision problems, such as object detection or large-scale classification. Our proposed one-stage object detection capsule network is able to obtain results on MS COCO which are on-par with other state-of-the-art one-stage CNN-based networks for the first time in the literature, while also producing fewer false positives. Examining the qualitative results, provided in Appendix A.3, lends empirical evidence that DeformCaps can better generalize to unusual poses/viewpoints of objects than CenterNet (Zhou et al., 2019a). We hope our work will inspire future research into the considerable potential of capsule networks.
Limitations: Our study contains some limitations, discussed in greater detail in Appendix A.1. Briefly, (1) we had difficulty integrating the bounding box regression values into our capsule object detection head; (2) the choice of descriptors used in the squeeze is somewhat handcrafted, and is open to further investigation; (3) the choice of dimensions to model the class-agnostic instantiation parameters of objects was chosen semi-arbitrarily and could likely improve from fine-search; and (4) the choice of reconstructing objects’ masks versus image patches is not thoroughly explored.
Future directions: The reconstruction sub-network of DeformCaps could possibly be trained to produce a fast single-shot instance segmentation framework. At test, potentially detected objects could have their instantiation vectors reconstructed into objects’ masks, then these masks would simply be resized to the predicted bounding-boxes, similar to He et al. (2017) but without needing to have the initial reshape and ROI alignment required in their two-stage approach.
A APPENDIX
A.1 EXTENDED EXPLANATIONS OF LIMITATIONS AND POSSIBLE RECOMMENDATIONS
In the discussion section of the main body of our paper, we mentioned four potential limitations in our study. We would like to discuss these in a bit more detail here. Since so many components of our method are newly introduced, there is a wide range of choices which could be investigated and improved by future researchers and engineers, and we suggest a few of those here:
(1) We had difficulty in integrating the bounding box regression values into our capsule object detection head. In our implementation, the class-agnostic capsules are trained to predict scale-normalized masks of 28 × 28. Ultimately, we would like to integrate predicting the object masks and the boxes for those masks together, as these tasks surely share mutual information. However, to the best of our knowledge, no published works exist for using capsules on a real-valued regression task.
(2) For our proposed SE-Routing, as with the original Squeeze-and-Excitation network, the choice of descriptors computed in the squeeze is somewhat handcrafted. We propose to use the cosine angle, KL divergence, and variance, and provide justifications for each of these choices, then allow the excitation to learn which of these pieces of information is most beneficial dynamically for each given input. Nonetheless, it is completely plausible that different descriptors could yield superior results. We unfortunately do not have the compute resources to run ablation studies over each of these chosen descriptors individually.
(3) The choice of 64 dimensions to model the class-agnostic instantiation parameters was decided somewhat empirically. As we argued in the main paper, it is unlikely that all variations across object poses are completely class independent; thus, to represent these extra dimensions of variation, we increase our vector lengths considerably (16→ 64). However, it is possible that the number of classindependent and class-dependent variations is significantly higher or lower than the value chosen, and largely will depend on the complexity of the data being modeled. This difficulty is analogous to determining the optimal number of convolutional filters to use at every given layer of a CNN. Related to this, there is the potential for the class-dependent dimensions of the instantiation vectors to have unwanted influence over the cosine angle descriptors when attempting to represent objects of other classes. It could be beneficial to pass class information from the class presence capsule type over to the object instantiation capsule type to dynamically attend to the relevant dimensions of its vector for a given object. In a similar manner, it could be beneficial when computing the probability aggregation using the linear opinion pool to weight the expert opinions in proportion to their uncertainty instead of uniformly.
(4) We chose to reconstruct objects’ masks with the motivation of forcing the network to learn variations in shape, pose, and deformations. Since CNNs are known to be biased to texture information
over shape, we chose not to explicitly supervise the learning of any texture information. Nonetheless, it is plausible that reconstructing the object with texture could yield superior performance. Further, we chose to set the value of the reconstruction regularization’s contribution to the loss to 0.1, following what was found most beneficial by CenterNet (Zhou et al., 2019a) for weighting the size loss contribution, and from a concern to not over-regularize the network early in training, then stepped this value to 2.0 half-way through training to make its value roughly equal to the other loss terms. From our experience, the accuracy remained fairly consistent across values up to 2.0 for this term, while setting its weight to 0.0 resulted in a degradation of performance. We found that increasing the value during training led to faster improvements in performance, consistent with other works in the literature that use such a regularization term. Engineering efforts on this parameter, such as a temperature function to automatically increase this weight during training, may prove beneficial if the goal is to reach the maximum possible accuracy.
A.2 ANALYSIS OF FALSE POSITIVES
DeformCaps tends to be more conservative with its detections than CenterNet. This can be observed both by the slightly lower confidence scores (typically 0.1 less than CenterNet for most detections), and by the overall smaller number of boxes placed in scenes. CenterNet tends to produce far more false positives than DeformCaps, both in the case of incorrect detections and of multiple detections for the same object which failed to be suppressed by the NMS algorithm. Though DeformCaps producing slightly lower confidence scores might account for some of the reduction in false positives, we observe CenterNet consistently producing fairly confident false predictions while DeformCaps does not produce a detection in the same region at all (see qualitative examples in Appendix A.3). A quantitative analysis of this is provided in Table 2. These numbers are generated using the official MS COCO evaluation code in its standard operation. However, instead of only returning the average precision (AP) ratio of true positives (TP) and false positives (FP), namely TP/(TP + FP), we also return the raw FP count.
In the default operation of CenterNet, there is no non-maximum suppression (NMS) operation performed. Instead, a sort of pseudo-NMS is performed by passing a 3 × 3 max-pooling operation over the detection grid activations to extract objects’ centers. When running CenterNet with multi-scale testing, an NMS operation is then added to effectively choose which scale is the best fit for a given object detection. Therefore, the false positives seen in Appendix A.3 are a direct result of multiple object centers being predicted incorrectly for the same object or object centers being predicted where there are no objects. We find that DeformCaps, which predicts objects as capsule vectors and not scalars, does not suffer from the former class of FPs.
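The 3 × 3 max-pooling pseudo-NMS mentioned here can be sketched in a few lines of PyTorch; the threshold and tensor shapes are illustrative assumptions.

```python
import torch.nn.functional as F

def pseudo_nms(heat, threshold=0.3):
    # heat: (B, K, H, W) heatmap; keep only local maxima under a 3x3 window.
    pooled = F.max_pool2d(heat, kernel_size=3, stride=1, padding=1)
    peaks = heat * (pooled == heat).float()
    return peaks > threshold, peaks
```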
This observation of fewer false positive detections is consistent with what we would expect from a capsule network with dynamic routing as compared to a traditional convolutional neural network (CNN). Where a CNN passes on all activations to the next layer, capsule networks utilize a dynamic routing algorithm to only pass on activations if there is agreement amongst the child capsule projections. In our proposed method specifically, with a SplitCaps structure and SE-Routing, the agreement is computed for projections of both the pose and class of the object being represented. It follows naturally that this would limit the number of false positive detections produced, by reducing the number of activations that get passed on. Further, we find from a survey of these qualitative examples that DeformCaps is better able to detect objects presented in an unusual pose or from an unusual viewpoint than its CenterNet counterpart. This gives empirical support to one of the purported benefits of capsule networks, namely the ability to better generalize to unseen poses and viewpoints.
A.3 QUALITATIVE RESULTS ON MS COCO
Included in this section are a number of qualitative examples for CenterNet (Zhou et al., 2019a) and the proposed DeformCaps on the MS COCO test-dev dataset (Lin et al., 2014) using both flip and multi-scale augmentations. Results for CenterNet were obtained using the official code and trained models provided by the authors (Zhou et al., 2019a). While we do not make any sweeping claims, we wish to comment on a few general patterns that seemed to emerge in these examples. In Figure 3, we show a prototypical example of the general trend we are describing. In Figures 4– 5, we include more examples of this trend. In Figures 6– 7, we include a set of interesting examples of unusual object viewpoints or poses being better captured by DeformCaps than by CenterNet. | 1. What are the strengths and weaknesses of the paper regarding its contributions to object detection with capsule architectures?
2. How does the reviewer assess the clarity and sufficiency of the descriptions and presentations of the introduced techniques, particularly deformable capsules and SE-Routing?
3. Does the reviewer find the motivations and problem-solving approaches adequately explained in the introductory part?
4. Are there any confusing aspects or missing information in the figures, specifically Figure 1, that the reviewer would like the authors to clarify or improve?
5. Are there any relevant articles or studies that the reviewer believes the authors should consider or cite in their work? | Review | Review
Pros: It is true that capsules have many promising attributes, but they are not easy to exploit in object detectors. This paper has addressed many problems that exist when applying capsule architectures to the object detection task. In particular, deformable capsules, SplitCaps, and SE-Routing are respectively introduced to help tackle object detection with capsules. I believe the techniques of this paper will attract interest from researchers in the corresponding areas. The writing is also good.
Cons: There are several points that prevent me from giving a higher rating:
I feel that the descriptions and presentations of the introduced techniques are not quite sufficient. In particular, I am confused about the details of deformable capsules. What are the exact operations performed by deformable capsules? Do they make parent capsules aggregate information from only a smaller set of their children, with the sampling of this smaller set implemented by deformable operations? I believe it would be much better to present detailed formulations or equations to illustrate this operation. Similarly, I also recommend adding detailed equations to describe SE-Routing.
Without sufficient information about the operations, I found the descriptions of the motivations to be quite weak as well, especially in the introductory part. The authors only mention the problems they will address and the techniques they propose to address them. There is not sufficient content briefly explaining why the proposed techniques can tackle the mentioned problems, or at least what the advantages of the proposed techniques are for tackling them.
The figures also have several confusing points. For example, in Figure 1, the authors state that solid red arrows represent 'up', but I can only find dotted red arrows in the figure. Moreover, where in the detector are the modules inside the big blue box implemented? After reading, I think the box represents what is inside SplitCaps, but I recommend that the authors add indicative symbols to the corresponding modules.
In addition to the reviewed literature, I found that there are some missing articles that also study how to borrow capsule concepts for various computer vision tasks. For example:
[1] Vijayakumar T. Comparative study of capsule neural network in various applications[J]. Journal of Artificial Intelligence, 2019, 1(01): 19-27.
[2] Zhao Y, Birdal T, Deng H, et al. 3D point capsule networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2019: 1009-1018.
[3] Chen Z, Zhang J, Tao D. Recursive Context Routing for Object Detection[J]. International Journal of Computer Vision, 2020: 1-19.
Overall, I would like to have some feedback from the authors regarding the above issues before making my final decision. |
ICLR | Title
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Abstract
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019a; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.
1 INTRODUCTION
Self-supervised learning on a general-domain text corpus followed by end-task learning is the two-staged training approach that enabled deep-and-wide Transformer-based networks (Vaswani et al., 2017) to advance language understanding (Devlin et al., 2018; Yang et al., 2019b; Sun et al., 2019b; Liu et al., 2019). However, state-of-the-art models have hundreds of millions of parameters, incurring a high computational cost. Our goal is to realize their gains under a restricted memory and latency budget. We seek a training method that is well-performing, general, and simple, and that can leverage additional resources such as unlabeled task data.
Before considering compression techniques, we start with the following research question: Could we directly train small models using the same two-staged approach? In other words, we explore the idea of applying language model (LM) pre-training and task fine-tuning to compact architectures directly. This simple baseline has so far been overlooked by the NLP community, potentially based on an underlying assumption that the limited capacity of compact models is capitalized better when focusing on the end task rather than a general language model objective. Concurrent work to ours proposes variations of the standard pre-training+fine-tuning procedure, but with limited generality (Sun et al., 2019a; Sanh, 2019). We make the surprising finding that pre-training+fine-tuning in its original formulation is a competitive method for building compact models.
For further gains, we additionally leverage knowledge distillation (Hinton et al., 2015), the standard technique for model compression. A compact student is trained to recover the predictions of a highly accurate teacher. In addition to the posited regularization effect of these soft labels (Hinton et al., 2015), distillation provides a means of producing pseudo-labels for unlabeled data. By regarding LM pre-training of compact models as a student initialization strategy, we can take advantage of both methods. The resulting algorithm is a sequence of three standard training operations: masked LM (MLM) pre-training (Devlin et al., 2018), task-specific distillation, and optional fine-tuning. From here on, we will refer to it as Pre-trained Distillation (PD) (Figure 1). As we will show in
Algorithm 1
Require: student θ, teacher Ω, unlabeled LM data DLM, unlabeled transfer data DT, labeled data DL
1: Initialize θ by pre-training an MLM+ on DLM
2: for each x ∈ DT do
3:   Get loss L ← −∑y PΩ(y|x) log Pθ(y|x)
4:   Update student θ ← BACKPROP(L, θ)
5: end for
6: Fine-tune θ on DL (optional step)
7: return θ

Figure 1: Pre-trained Distillation. The compact model is first pre-trained on unlabeled LM data, then distilled from a large teacher on unlabeled transfer data, and optionally fine-tuned on labeled data to produce the final compact model.
Section 6.2, PD outperforms the pre-training+fine-tuning (PF) baseline, especially in the presence of a large transfer set for distillation.
In a controlled study following data and model architecture settings in concurrent work (Section 4), we show that Pre-trained Distillation outperforms or is competitive with more elaborate approaches which use either more sophisticated distillation of task knowledge (Sun et al., 2019a) or more sophisticated pre-training from unlabeled text (Sanh, 2019). The former distill task knowledge from intermediate teacher activations, starting with a heuristically initialized student. The latter fine-tune a compact model that is pre-trained on unlabeled text with the help of a larger LM teacher.
One of the most noteworthy contributions of our paper is the set of extensive experiments that examine how Pre-trained Distillation and its baselines perform under various conditions. We investigate two axes that have been under-studied in previous work: model size and amount/quality of unlabeled data. While experimenting with 24 models of various sizes (4m to 110m parameters) and depth/width trade-offs, we observe that pre-trained students can leverage depth much better than width; in contrast, this property is not visible for randomly-initialized models. For the second axis, we vary the amount of unlabeled data, as well as its similarity to the labeled set. Interestingly, Pre-trained Distillation is more robust to these variations in the transfer set than standard distillation.
Finally, in order to gain insight into the interaction between LM pre-training and task-specific distillation, we sequentially apply these operations on the same dataset. In this experiment, chaining the two operations performs better than any one of them applied in isolation, despite the fact that a single dataset was used for both steps. This compounding effect is surprising, indicating that pre-training and distillation are learning complementary aspects of the data.
Given the effectiveness of LM pre-training on compact architectures, we will make our 24 pretrained miniature BERT models publicly available in order to accelerate future research.
2 PROBLEM STATEMENT
Our high-level goal is to build accurate models which fit a given memory and latency budget. There are many aspects to explore: the parametric form of the compact model (architecture, number of parameters, trade-off between number of hidden layers and embedding size), the training data (size, distribution, presence or absence of labels, training objective), etc. Since an exhaustive search over this space is impractical, we fix the model architecture to bidirectional Transformers, known to be suitable for a wide range of NLP tasks (Vaswani et al., 2017; Devlin et al., 2018). The rest of this section elaborates on the training resources we assume to have at our disposal.
The teacher is a highly accurate but large model for an end task that does not meet the resource constraints. Prior work on distillation often makes use of an ensemble of networks (Hinton et al., 2015). For faster experimentation, we use a single teacher, without making a statement about the best architectural choice. In Section 4, the teacher is pre-trained BERTBASE fine-tuned on labeled end-task data. In Section 6, we use BERTLARGE instead.
Students are compact models that satisfy resource constraints. Since model size qualifiers are relative (e.g., what is considered small in a data center can be impractically large on a mobile device),
we investigate an array of 24 model sizes, from our TransformerTINY (4m parameters) all the way up to TransformerBASE (110m parameters)1. The student model sizes and their relative speed-up compared to the BERTLARGE teacher can be found in Table 1. Interested readers can situate themselves on this spectrum based on their resource constraints. For readability, most plots show a selection of 5 models, but we verify that our conclusions hold for all 24.
Labeled data (DL) is a set of N training examples {(x1, y1), ..., (xN , yN )}, where xi is an input and yi is a label. For most NLP tasks, labeled sets are hard to produce and thus restricted in size.
Unlabeled transfer data (DT) is a set of M input examples of the form {x′1, ..., x′M} sampled from a distribution that is similar to but possibly not identical to the input distribution of the labeled set. During distillation, the teacher transfers knowledge to the student by exposing its label predictions for instances x′m. DT can also include the input portion of labeled data DL instances. Due to the lack of true labels, such sets are generally easier to produce and consequently larger than labeled ones. Note, however, that task-relevant input text is not readily available for key tasks requiring paired texts such as natural language inference and question answering, as well as domain-specific dialog understanding. In addition, for deployed systems, input data distribution shifts over time and existing unlabeled data becomes stale (Kim et al., 2017).
Unlabeled language model data (DLM ) is a collection of natural language texts that enable unsupervised learning of text representations. We use it for unsupervised pre-training with a masked language model objective (Devlin et al., 2018). Because no labels are needed and strong domain similarity is not required, these corpora are often vast, containing thousands of millions of words.
The distinction between the three types of datasets is strictly functional. Note that they are not necessarily disjoint. For instance, the same corpus that forms the labeled data can also be part of the unlabeled transfer set, after its labels are discarded. Similarly, corpora that are included in the transfer set can also be used as unlabeled LM data.
3 PRE-TRAINED DISTILLATION
Pre-trained Distillation (PD) (Figure 1) is a general, yet simple algorithm for building compact models that can leverage all the resources enumerated in Section 2. It consists of a sequence of three standard training operations that can be applied to any choice of architecture:
1. Pre-training on DLM . A compact model is trained with a masked LM objective (Devlin et al., 2018), capturing linguistic phenomena from a large corpus of natural language texts.
2. Distillation on DT . This well-read student is now prepared to take full advantage of the teacher expertise, and is trained on the soft labels (predictive distribution) produced by the teacher. As we will show in Section 6.2, randomly initialized distillation is constrained by the size and distribution of its unlabeled transfer set. However, the previous pre-training step mitigates to some extent the negative effects caused by an imperfect transfer set.
3. (Optional) fine-tuning on DL. This step makes the model robust to potential mismatches between the distribution of the transfer and labeled sets. We will refer to the two-step algorithm as PD, and to the three-step algorithm as PDF.
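As a rough illustration of how these operations fit together, the sketch below implements the distillation step of Algorithm 1 around a generic classification student and teacher. The model objects, data loader, and optimizer are placeholders, so this is a schematic under those assumptions rather than the training code used in the paper; steps 1 and 3 would wrap around this loop using standard pre-training and fine-tuning code.

```python
import torch
import torch.nn.functional as F

def distill(student, teacher, transfer_loader, optimizer, device="cpu"):
    """Step 2 of Algorithm 1: train the (pre-trained) student on teacher soft labels."""
    teacher.eval()
    student.train()
    for x in transfer_loader:                              # unlabeled transfer data DT
        x = x.to(device)
        with torch.no_grad():
            soft = F.softmax(teacher(x), dim=-1)           # P_teacher(y | x)
        log_probs = F.log_softmax(student(x), dim=-1)      # log P_student(y | x)
        loss = -(soft * log_probs).sum(dim=-1).mean()      # cross-entropy with soft labels
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student
```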
1Note that our TransformerBASE and BERTBASE in Devlin et al. (2018) have the same architecture. We use the former term for clarity, since not all students in Section 6 are pre-trained.
Figure 3: Pre-trained Distillation (PD) and concurrent work on model compression.
While we are treating our large teachers as black boxes, it is worth noting that they are produced by pre-training and fine-tuning. Since the teacher could potentially transfer the knowledge it has obtained via pre-training to the student through distillation, it is a priori unclear whether pre-training the student would bring additional benefits. As Section 6.2 shows, pre-training students is surprisingly important, even when millions of samples are available for transfer.
4 COMPARISON TO CONCURRENT WORK
There are concurrent efforts to ours aiming to leverage both pre-training and distillation in the context of building compact models. Though inspired by the two-stage pre-training+fine-tuning approach that enabled deep-and-wide architectures to advance the state-of-the-art in language understanding, they depart from this traditional method in several key ways.
Patient Knowledge Distillation (Sun et al., 2019a) initializes a student from the bottom layers of a deeper pre-trained model, then performs task-specific patient distillation. The training objective relies not only on the teacher output, but also on its intermediate layers, thus making assumptions about the student and teacher architectures. In a parallel line of work, DistilBert (Sanh, 2019) applies the same truncation-based initialization method for the student, then continues its LM pre-training via distillation from a more expensive LM teacher, and finally fine-tunes on task data. Its downside is that LM distillation is computationally expensive, as it requires a softmax operation over the entire vocabulary to compute the expensive LM teacher’s predictive distribution. A common limitation in both studies is that the initialization strategy constrains the student to the teacher embedding size. Table 2 summarizes the differences between concurrent work and Pre-trained Distillation (PD).
To facilitate direct comparison, in this section we perform an experiment with the same model architecture, sizes, and dataset settings used in the two studies mentioned above. We perform Pre-trained Distillation on a 6-layer BERT student with task supervision from a 12-layer BERTBASE teacher, using embedding size 768 for both models. For distillation, our transfer set coincides with
the labeled set (DT =DL). Table 3 reports results on the 6 GLUE tasks selected by Sun et al. (2019a) and shows that, on average, PD performs best. For anchoring, we also provide quality numbers for pre-training+fine-tuning (PF), which is surprisingly competitive to the more elaborate alternatives in this setting where DT is not larger than DL. Remarkably, PF does not compromise generality or simplicity for quality. Its downside is, however, that it cannot leverage unlabeled task data and teacher model predictions.
5 ANALYSIS SETTINGS
Given these positive results, we aim to gain more insight into Pre-trained Distillation. We perform extensive analyses on two orthogonal axes—model sizes and properties of unlabeled data, thus departing from the settings used in Section 4.
All our models follow the Transformer architecture (Vaswani et al., 2017) and input processing used in BERT (Devlin et al., 2018). We denote the number of hidden layers as L and the hidden embedding size as H , and refer to models by their L/H dimensions. We always fix the number of self-attention heads to H/64 and the feed-forward/filter size to 4H . The end-task models are obtained by stacking a linear classifier on top of the Transformer architectures.
The teacher, BERTLARGE, has dimensions 24L/1024H and 340M parameters. We experiment with 24 student models, with sizes and relative latencies listed in Table 1. The most expensive student, TransformerBASE, is 3 times smaller and 1.25 times faster than the teacher; the cheapest student, TransformerTINY, is 77 times smaller and 65 times faster. For readability, we report results on a selection of 5 students, but verify that all conclusions hold across the entire 24-model grid.
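For readers who want to place an architecture on this spectrum, the following back-of-the-envelope estimate reproduces the rough parameter counts of the L/H grid (heads = H/64, feed-forward size = 4H). The vocabulary size and the omission of biases and LayerNorm parameters are simplifying assumptions, so the numbers approximate rather than exactly match Table 1.

```python
VOCAB = 30522  # BERT WordPiece vocabulary size

def approx_params(num_layers: int, hidden: int) -> int:
    embeddings = VOCAB * hidden
    attention = 4 * hidden * hidden            # query, key, value, output projections
    ffn = 2 * hidden * (4 * hidden)            # two feed-forward projections of size 4H
    return embeddings + num_layers * (attention + ffn)

sizes = {"Tiny": (2, 128), "Mini": (4, 256), "Small": (4, 512),
         "Medium": (8, 512), "Base": (12, 768), "Teacher (Large)": (24, 1024)}
for name, (L, H) in sizes.items():
    print(f"{name:16s} ~{approx_params(L, H) / 1e6:.0f}M parameters")
```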
5.1 ANALYSIS BASELINES
We select three baselines for Pre-trained Distillation that can provide insights into the contributions made by each of its constituent operations.
Basic Training (Figure 4a) is the standard supervised learning method: a compact model is trained directly on the labeled set.
Knowledge Distillation (Figure 4b) (Bucilă et al., 2006; Hinton et al., 2015) (or simply “distillation”) transfers information from a highly-parameterized and accurate teacher model to a more compact and thus less expressive student. For classification tasks, distillation exposes the student to soft labels, namely the class probabilities produced by the teacher, pl = softmax(zl/T), where pl is the output probability for class l, zl is the logit for class l, and T is a constant called the temperature that controls the smoothness of the output distribution. The softness of the labels enables better generalization than the gold hard labels. For each end task, we train: (i) a teacher obtained by fine-tuning pre-trained BERTLARGE (24L/1024H) on the labeled dataset (note teachers do not learn from the transfer set), and (ii) 24 students of various sizes. Students are always distilled on the soft labels produced by the teacher with a temperature of 1 (see footnote 2).
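The soft labels above are simply temperature-scaled softmax outputs. The toy logits in this snippet are made up, and T = 1 mirrors the setting used here, with a larger T shown only to illustrate the smoothing effect.

```python
import torch

def soft_labels(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """p_l = softmax(z_l / T): the teacher's softened class distribution."""
    return torch.softmax(logits / temperature, dim=-1)

z = torch.tensor([[4.0, 1.0, -2.0]])         # hypothetical teacher logits
print(soft_labels(z, temperature=1.0))       # peaked distribution (setting used here)
print(soft_labels(z, temperature=4.0))       # smoother distribution at higher T
```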
Pre-training+Fine-tuning (Figure 4c) (Dai & Le, 2015; Devlin et al., 2018), or simply PF, leverages large unlabeled general-domain corpora to pre-train models that can be fine-tuned for end tasks.
2 While Hinton et al. (2015) show that tuning the temperature could increase performance, we did not observe notable gains. They also propose using a weighted sum of the soft and hard labels, but this approach cannot be applied directly in our set-up, since not all instances in our unlabeled transfer set have hard labels. Our optional final fine-tuning step similarly up-weights the hard labels in the labeled set.
Following BERT, we perform pre-training with the masked LM (MLM) and next sentence objectives (collectively referred to as MLM+ from here on). The resulting model is fine-tuned on end-task labeled data. While pre-training large models has been shown to provide substantial benefits, we are unaware of any prior work systematically studying its effectiveness on compact architectures.
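The MLM objective can be sketched as follows, using the masking recipe of Devlin et al. (2018): 15% of tokens are selected, of which 80% are replaced by [MASK], 10% by a random token, and 10% kept unchanged. The toy tokenization and vocabulary are illustrative assumptions, not the pre-processing pipeline used in our experiments.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, mask_token="[MASK]", rng=random):
    """Return (corrupted inputs, per-position labels) for a masked LM example."""
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok                    # the model must recover this token
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_token         # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.choice(vocab)  # 10%: replace with a random token
            # remaining 10%: keep the original token
    return inputs, labels

toks = "the well read student learns better".split()
print(mask_tokens(toks, vocab=toks))
```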
5.2 ANALYSIS TASKS AND DATASETS
The tasks and associated datasets are summarized in Table 4.
Sentiment classification aims to classify text according to the polarities of opinions it contains. We perform 3-way document classification on Amazon Book Reviews (He & McAuley, 2016). Its considerable size (8m) allows us to closely follow the standard distillation setting, where there is a large number of unlabeled examples for transfer. Additionally, we test our algorithm on SST-2 (Socher et al., 2013),
which is a binary sentence classification task, and our results are directly comparable with prior work on the GLUE leaderboard (Wang et al., 2018). We use whole documents from Amazon Movie Reviews (1.7m) as unlabeled transfer data (note that SST-2 consists of single sentences).
Natural language inference involves classifying pairs of sentences (a premise and a hypothesis) as entailment, contradiction, or neutral. This task is representative of the scenario in which proxy data is non-trivial to gather (Gururangan et al., 2018). We chose MNLI (Williams et al., 2018) as our target dataset. Since strictly in-domain data is difficult to obtain, we supplement DT with two other sentence-pair datasets: SNLI (Bowman et al., 2015) and QQP (Chen et al., 2018).
Textual entailment is similar to NLI, but restricted to binary classification (entailment vs nonentailment). The most popular RTE dataset (Bentivogli et al., 2009) is two orders of magnitude smaller than MNLI and offers an extreme test of robustness to the amount of transfer data.
6 ANALYSIS
In this section, we conduct experiments that help us understand why Pre-trained Distillation is successful and how to attribute credit to its constituent operations.
6.1 THERE ARE NO SHORTCUTS: WHY FULL PRE-TRAINING IS NECESSARY
As later elaborated in Section 7, earlier efforts to leverage pre-training in the context of compact models simply feed pre-trained (possibly contextual) input representations into randomly-initialized students (Hu et al., 2018; Chia et al., 2018; Tang et al., 2019). Concurrent work initializes shallowand-wide students from the bottom layers of their deeper pre-trained counterparts (Yang et al., 2019a; Sun et al., 2019a). The experiments below indicate these strategies are suboptimal, and that LM pre-training is necessary in order to unlock the full student potential.
Is it enough to pre-train word embeddings? No. In order to prove that pre-training Transformer layers is important, we compare two flavors of Pre-trained Distillation3: PD with pre-trained word embeddings and PD with pre-trained word embeddings and Transformer layers. We produce wordpiece embeddings by pre-training one-layer Transformers for each embedding size. We then discard the single Transformer layer and keep the embeddings to initialize our students.
For MNLI (Figure 5), less than 24% of the gains PD brings over distillation can be attributed to the pre-trained word embeddings (for TransformerTINY, this drops even lower, to 5%). The rest of the benefits come from additionally pre-training the Transformer layers.
3Note that, for both flavors of PD, none of the student parameters are frozen; the word embeddings do get updated during distillation.
Figure 5: Pre-training outperforms truncation. MNLI accuracy for students from Tiny 2L/128H to Base 12L/768H, comparing PD with MLM pre-training, PD from a truncated 12-layer model, PD with pre-trained word embeddings only, and plain distillation. Students initialized via LM pre-training (green) outperform those initialized from the bottom layers of 12-layer pre-trained models (gray). When only word embeddings are pre-trained (red), performance is degraded even further.
Is it worse to truncate deep pre-trained models? Yes, especially for shallow students. Given that pre-training is an expensive process, an exhaustive search over model sizes in the pursuit of the one that meets a certain performance threshold can be impractical. Instead of pre-training all (number of layers, embedding size) combinations of students, one way of short-cutting the process is to pre-train a single deep (e.g. 12-layer) student for each embedding size, then truncate it at various heights. Figure 5 shows that this can be detrimental especially to shallow architectures; TransformerTINY loses more than 73% of the pre-training gains over distillation. As expected, losses fade away as the number of layers increases.
What is the best student for a fixed parameter size budget? As a rule of thumb, prioritize depth over width, especially with pre-trained students. Figure 6 presents a comparison between 24 student model architectures on SST-2, demonstrating how well different students utilize model capacity. They are sorted first by the hidden size, then by the number of layers. This roughly corresponds to a monotonic increase in the number of parameters, with a few exceptions for the largest students. The quality of randomly initialized students (i.e. basic training and distillation) is closely correlated with the number of parameters. With pre-training (i.e. PD and PF), we observe two intuitive findings: (1) pre-trained models are much more effective at using more parameters, and (2) pre-trained models are particularly efficient at utilizing depth, as indicated by the sharp drops in performance when moving to wider but shallower models.
This is yet another argument against initialization via truncation: for instance, truncating the bottom two layers of BERTBASE would lead to a suboptimal distribution of parameters: the 2L/768H model (39.2m parameters) is dramatically worse than e.g. 6L/512H (35.4m parameters).
6.2 UNDER THE HOOD: DISSECTING PRE-TRAINED DISTILLATION
In the previous section, we presented empirical evidence for the importance of the initial LM pretraining step. In this section, we show that distillation brings additional value, especially in the presence of a considerably-sized transfer set, and that fine-tuning ensures robustness when the unlabeled data diverges from the labeled set.
Comparison to analysis baselines First, we quantify how much Pre-trained Distillation improves upon its constituent operations applied in isolation. We compare it against the baselines established in Section 5.1 (basic training, distillation, and pre-training+fine-tuning) on the three NLP tasks
described in Section 5.2. We use the BookCorpus (Zhu et al., 2015) and English Wikipedia as our unlabeled LM set, following the same pre-training procedure as Devlin et al. (2018).
Results in Figure 7 confirm that PD outperforms these baselines, with particularly remarkable results on the Amazon Book Reviews corpus, where TransformerMINI recovers the accuracy of the teacher at a 31x decrease in model size and 16x speed-up. Distillation achieves the same performance with TransformerBASE, which is 10x larger than TransformerMINI. Thus PD can compress the model more effectively than distillation. On RTE, Pre-trained Distillation improves TransformerTINY by more than 5% absolute over the closest baseline (pre-training+fine-tuning) and is the only method to recover teacher accuracy with TransformerBASE.
It is interesting to note that the performance of the baseline systems is closely related to the size of the transfer set. For the sentence-pair tasks such as MNLI and RTE, where the size of the transfer set is moderate (1.3m) and slightly out-of-domain (see Table 4), pre-training+fine-tuning out-performs distillation across all student sizes, with an average of 12% for MNLI and 8% on RTE. Interestingly, the order is inverted on Amazon Book Reviews, where the large transfer set (8m) is strictly indomain: distillation is better than pre-training+fine-tuning by an average of 3%. On the other hand, Pre-trained Distillation is consistently best in all cases. We will examine the robustness of Pre-trained Distillation in the rest of the section.
Robustness to transfer set size It is generally accepted that distillation is reliant upon a large transfer set. For instance, distillation for speech recognition is performed on hundreds of millions of data points (Li et al., 2014; Hinton et al., 2015).
We reaffirm this statement through experiments on Amazon Book Reviews in Figure 8, since Amazon Book Reviews has the largest transfer set. Distillation barely recovers teacher accuracy with the largest student (TransformerBASE), using the entire 8m transfer set. When only a 1m transfer set is available, performance is 4% behind the teacher model. In contrast, PD achieves the same performance with TransformerMINI on 5m instances. In other words, PD can match the teacher model with a 10x smaller model and 1.5x less transfer data, compared to distillation.
Robustness to domain shift To the best of our knowledge, there is no prior work that explicitly studies how distillation is impacted by the mismatch between training and transfer sets (which we will refer to as domain shift). Many previous distillation efforts focus on tasks where the two sets come from the same distribution (Romero et al., 2014; Hinton et al., 2015), while others simply acknowledge the importance of and strive for a close match between them (Bucilă et al., 2006).
We provide empirical evidence that out-of-domain data degrades distillation and that our algorithm is more robust to mismatches between DL and DT . We measure domain shift using the Spearman rank correlation coefficient (which we refer to as Spearman or simply S), introduced as a general metric in (Spearman, 1904) and first used as a corpus similarity metric in (Johansson et al., 1989). To compute corpus similarity, we follow the procedure described in (Kilgarriff & Rose, 1998): for two datasets X and Y , we compute the corresponding frequency ranks FX and FY of their most
common n = 100 words. For each of these words, the difference di between its ranks in FX and FY is computed. The final statistic is given by the standard Spearman formula: 1 − 6 ∑i=1..n di² / (n(n² − 1)). To measure the effect of domain shift, we again experiment on the Amazon Book Reviews task. Instead of varying the size of the transfer sets, this time we keep size fixed (to 1.7m documents) and vary the source of the unlabeled text used for distillation. Transfer set domains vary from not task-related (paragraphs from Wikipedia with S=0.43), to reviews for products of an unrelated category (electronics reviews with S=0.52), followed by reviews from a related category (movie reviews with S=0.76), and finally in-domain book reviews (S=1.0). Results in Figure 9 show a direct correlation between accuracy and the Spearman coefficient for both distillation and PD. When S drops to 0.43, distillation on DT is 1.8% worse than basic training on DL, whereas PD suffers a smaller loss over pre-training+fine-tuning, and a gain of about 1.5% when a final fine-tuning step is added. When reviews from an unrelated product category are used as the transfer set (S=0.52), PD obtains a much larger gain from learning from the teacher, compared to distillation.
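A possible implementation of this corpus-similarity statistic is sketched below. Tokenization by whitespace and restricting the comparison to words that appear in both top-100 lists are assumptions on our part, since the exact preprocessing is not specified.

```python
from collections import Counter

def spearman_similarity(corpus_x, corpus_y, n=100):
    """Spearman rank correlation of the top-n word-frequency ranks of two corpora."""
    def top_ranks(corpus):
        counts = Counter(w for doc in corpus for w in doc.lower().split())
        return {w: rank for rank, (w, _) in enumerate(counts.most_common(n), start=1)}
    fx, fy = top_ranks(corpus_x), top_ranks(corpus_y)
    shared = [w for w in fx if w in fy]                  # words ranked in both corpora
    d2 = sum((fx[w] - fy[w]) ** 2 for w in shared)
    m = len(shared)
    return 1.0 - 6.0 * d2 / (m * (m ** 2 - 1))

books = ["the book was a great read", "a great story and a great plot"]
electronics = ["the camera was great", "poor battery but great screen"]
print(spearman_similarity(books, electronics))
```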
6.3 BETTER TOGETHER: THE COMPOUND EFFECT OF PRE-TRAINING AND DISTILLATION
We investigate the interaction between pre-training and distillation by applying them sequentially on the same data. We compare the following two algorithms: Pre-training+Fine-tuning with DLM = X and Pre-trained Distillation with DLM = DT = X. Any additional gains that the latter brings over the former must be attributed to distillation, providing evidence that the compound effect still exists.
For MNLI, we set DLM = DT = NLI* (the MNLI transfer set supplemented with SNLI and QQP, as described in Section 5.2) and continue the experiment above by taking the students pre-trained on DLM = NLI* and distilling them on DT = NLI*. As shown in Figure 10, PD is better than PF by 2.2% on average over all student sizes. Note that even when pre-training and then distilling on the same data, PD outperforms the two training strategies applied in isolation. The two methods are thus
learning different linguistic aspects, both useful for the end task.
7 RELATED WORK
Pre-training Decades of research have shown that unlabeled text can help learn language representations. Word embeddings were first used (Mikolov et al., 2013; Pennington et al., 2014), while subsequently contextual word representations were found more effective (Peters et al., 2018). Most recently, research has shifted towards fine-tuning methods (Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019), where entire large pre-trained representations are fine-tuned for end tasks together with a small number of task-specific parameters. While feature-based unsupervised representations have been successfully used in compact models (Johnson & Zhang, 2015; Gururangan et al., 2019), inter alia, the pretraining+fine-tuning approach has not been studied in depth for such small models.
Learning compact models In this work we built on model compression (Bucilă et al., 2006) and its variant knowledge distillation (Hinton et al., 2015). Other related efforts introduced ways to transfer more information from a teacher to a student model, by sharing intermediate layer activations (Romero et al., 2014; Yim et al., 2017; Sun et al., 2019a). We experimented with related approaches, but found only slight gains which were dominated by the gains from pre-training and were not complementary. Prior works have also noted the unavailability of in-domain large-scale transfer data and proposed the use of automatically generated pseudo-examples (Bucilă et al., 2006; Kimura et al., 2018). Here we showed that large-scale general domain text can be successfully used for pre-training instead. A separate line of work uses pruning or quantization to derive smaller models (Han et al., 2016; Gupta et al., 2015). Gains from such techniques are expected to be complementary to PD.
Distillation with unsupervised pre-training Early efforts to leverage both unsupervised pretraining and distillation provide pre-trained (possibly contextual) word embeddings as inputs to students, rather than pre-training the student stack. For instance, Hu et al. (2018) use ELMo embeddings, while (Chia et al., 2018; Tang et al., 2019) use context-independent word embeddings. Concurrent work initializes Transformer students from the bottom layers of a 12-layer BERT model (Yang et al., 2019a; Sun et al., 2019a; Sanh, 2019). The latter continues student LM pre-training via distillation from a more expensive LM teacher. For a different purpose of deriving a single model for multiple tasks through distillation, Clark et al. (2019) use a pre-trained student model of the same size as multiple teacher models. However, none of the prior work has analyzed the impact of unsupervised learning for students in relation to the model size and domain of the transfer set.
8 CONCLUSION
We conducted extensive experiments to gain understanding of how knowledge distillation and the pre-training+fine-tuning algorithm work in isolation, and how they interact. We made the finding that their benefits compound, and unveiled the power of Pre-trained Distillation, a simple yet effective method to maximize the utilization of all available resources: a powerful teacher, and multiple sources of data (labeled sets, unlabeled transfer sets, and unlabeled LM sets). | 1. What is the focus of the paper regarding language models and distillation?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works?
3. How does the reviewer assess the significance and novelty of the work?
4. Are there any concerns or suggestions regarding the experimental results and their presentation?
5. Is there anything else the reviewer would like to know or discuss about the paper? | Review | Review
The authors investigate the problem of training compact pre-trained language models via distillation. Their method consists of three steps:
1. pre-train the compact model LM
2. distill the compact model LM with a larger model (teacher)
3. fine-tune the compact model on target task
This idea is not significantly new, since it is quite common to apply distillation to compress models, and the results are largely empirical. From Table 3, the results on test sets are better than those of previous works, but not by much. The authors spend quite a lot of space on ablation studies to investigate the contribution of different factors, and on cross-domain transfers. They do manage to show that using a teacher to distill a compact student model does better than directly pre-training a compact model on the NLI* task in Section 6.3. It would be better if they could show this for other tasks on the benchmark as well.
Overall I think this work is somewhat incremental, and falls below the acceptance threshold. |
ICLR | Title
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Abstract
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019a; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.
1 INTRODUCTION
Self-supervised learning on a general-domain text corpus followed by end-task learning is the twostaged training approach that enabled deep-and-wide Transformer-based networks (Vaswani et al., 2017) to advance language understanding (Devlin et al., 2018; Yang et al., 2019b; Sun et al., 2019b; Liu et al., 2019). However, state-of-the-art models have hundreds of millions of parameters, incurring a high computational cost. Our goal is to realize their gains under a restricted memory and latency budget. We seek a training method that is well-performing, general and simple and can leverage additional resources such as unlabeled task data.
Before considering compression techniques, we start with the following research question: Could we directly train small models using the same two-staged approach? In other words, we explore the idea of applying language model (LM) pre-training and task fine-tuning to compact architectures directly. This simple baseline has so far been overlooked by the NLP community, potentially based on an underlying assumption that the limited capacity of compact models is capitalized better when focusing on the end task rather than a general language model objective. Concurrent work to ours proposes variations of the standard pre-training+fine-tuning procedure, but with limited generality (Sun et al., 2019a; Sanh, 2019). We make the surprising finding that pre-training+fine-tuning in its original formulation is a competitive method for building compact models.
For further gains, we additionally leverage knowledge distillation (Hinton et al., 2015), the standard technique for model compression. A compact student is trained to recover the predictions of a highly accurate teacher. In addition to the posited regularization effect of these soft labels (Hinton et al., 2015), distillation provides a means of producing pseudo-labels for unlabeled data. By regarding LM pre-training of compact models as a student initialization strategy, we can take advantage of both methods. The resulting algorithm is a sequence of three standard training operations: masked LM (MLM) pre-training (Devlin et al., 2018), task-specific distillation, and optional fine-tuning. From here on, we will refer to it as Pre-trained Distillation (PD) (Figure 1). As we will show in
Algorithm 1 Require: student θ, teacher Ω, unlabeled LM data DLM , unlabeled transfer dataDT , labeled dataDL
1: Initialize θ by pre-training an MLM+ on DLM 2: for each x ∈ DT do 3: Get loss L← − ∑ y PΩ(y|x) logPθ(y|x) 4: Update student θ ← BACKPROP(L, θ) 5: end for 6: Fine-tune θ on DL . Optional step. 7: return θ
Pre-training
Distillation
Fine-tuning (Optional)
Compact Model
Unlabeled LM data
Unlabeled transfer data
Labeled data
Large teacher
Final Compact Model
Figure 1: Pre-trained Distillation
Section 6.2, PD outperforms the pre-training+fine-tuning (PF) baseline, especially in the presence of a large transfer set for distillation.
In a controlled study following data and model architecture settings in concurrent work (Section 4), we show that Pre-trained Distillation outperforms or is competitive with more elaborate approaches which use either more sophisticated distillation of task knowledge (Sun et al., 2019a) or more sophisticated pre-training from unlabeled text (Sanh, 2019). The former distill task knowledge from intermediate teacher activations, starting with a heuristically initialized student. The latter fine-tune a compact model that is pre-trained on unlabeled text with the help of a larger LM teacher.
One of the most noteworthy contributions of our paper are the extensive experiments that examine how Pre-trained Distillation and its baselines perform under various conditions. We investigate two axes that have been under-studied in previous work: model size and amount/quality of unlabeled data. While experimenting with 24 models of various sizes (4m to 110m parameters) and depth/width trade-offs, we observe that pre-trained students can leverage depth much better than width; in contrast, this property is not visible for randomly-initialized models. For the second axis, we vary the amount of unlabeled data, as well as its similarity to the labeled set. Interestingly, Pretrained Distillation is more robust to these variations in the transfer set than standard distillation.
Finally, in order to gain insight into the interaction between LM pre-training and task-specific distillation, we sequentially apply these operations on the same dataset. In this experiment, chaining the two operations performs better than any one of them applied in isolation, despite the fact that a single dataset was used for both steps. This compounding effect is surprising, indicating that pre-training and distillation are learning complementary aspects of the data.
Given the effectiveness of LM pre-training on compact architectures, we will make our 24 pretrained miniature BERT models publicly available in order to accelerate future research.
2 PROBLEM STATEMENT
Our high-level goal is to build accurate models which fit a given memory and latency budget. There are many aspects to explore: the parametric form of the compact model (architecture, number of parameters, trade-off between number of hidden layers and embedding size), the training data (size, distribution, presence or absence of labels, training objective), etc. Since an exhaustive search over this space is impractical, we fix the model architecture to bidirectional Transformers, known to be suitable for a wide range of NLP tasks (Vaswani et al., 2017; Devlin et al., 2018). The rest of this section elaborates on the training resources we assume to have at our disposal.
The teacher is a highly accurate but large model for an end task, that does not meet the resource constraints. Prior work on distillation often makes use of an ensemble of networks (Hinton et al., 2015). For faster experimentation, we use a single teacher, without making a statement about the best architectural choice. In Section 4, the teacher is pre-trained BERTBASE fine-tuned on labeled end-task data. In Section 6, we use BERTLARGE instead.
Students are compact models that satisfy resource constraints. Since model size qualifiers are relative (e.g., what is considered small in a data center can be impractically large on a mobile device),
we investigate an array of 24 model sizes, from our TransformerTINY (4m parameters) all the way up to TransformerBASE (110m parameters)1. The student model sizes and their relative speed-up compared to the BERTLARGE teacher can be found in Table 1. Interested readers can situate themselves on this spectrum based on their resource constraints. For readability, most plots show a selection of 5 models, but we verify that our conclusions hold for all 24.
Labeled data (DL) is a set of N training examples {(x1, y1), ..., (xN , yN )}, where xi is an input and yi is a label. For most NLP tasks, labeled sets are hard to produce and thus restricted in size.
Unlabeled transfer data (DT ) is a set ofM input examples of the form {x′1, ..., x′M} sampled from a distribution that is similar to but possibly not identical to the input distribution of the labeled set. During distillation, the teacher transfers knowledge to the student by exposing its label predictions for instances x′m. DT can also include the input portion of labeled data DL instances. Due to the lack of true labels, such sets are generally easier to produce and consequently larger than labeled ones. Note, however, that task-relevant input text is not readily available for key tasks requiring paired texts such as natural language inference and question answering, as well as domain-specific dialog understanding. In addition, for deployed systems, input data distribution shifts over time and existing unlabeled data becomes stale (Kim et al., 2017).
Unlabeled language model data (DLM ) is a collection of natural language texts that enable unsupervised learning of text representations. We use it for unsupervised pre-training with a masked language model objective (Devlin et al., 2018). Because no labels are needed and strong domain similarity is not required, these corpora are often vast, containing thousands of millions of words.
The distinction between the three types of datasets is strictly functional. Note they are not necessarily disjunct. For instance, the same corpus that forms the labeled data can also be part of the unlabeled transfer set, after its labels are discarded. Similarly, corpora that are included in the transfer set can also be used as unlabeled LM data.
3 PRE-TRAINED DISTILLATION
Pre-trained Distillation (PD) (Figure 1) is a general, yet simple algorithm for building compact models that can leverage all the resources enumerated in Section 2. It consists of a sequence of three standard training operations that can be applied to any choice of architecture:
1. Pre-training on DLM . A compact model is trained with a masked LM objective (Devlin et al., 2018), capturing linguistic phenomena from a large corpus of natural language texts.
2. Distillation on DT . This well-read student is now prepared to take full advantage of the teacher expertise, and is trained on the soft labels (predictive distribution) produced by the teacher. As we will show in Section 6.2, randomly initialized distillation is constrained by the size and distribution of its unlabeled transfer set. However, the previous pre-training step mitigates to some extent the negative effects caused by an imperfect transfer set.
3. (Optional) fine-tuning on DL. This step makes the model robust to potential mismatches between the distribution of the transfer and labeled sets. We will refer to the two-step algorithm as PD, and to the three-step algorithm as PDF.
1Note that our TransformerBASE and BERTBASE in Devlin et al. (2018) have the same architecture. We use the former term for clarity, since not all students in Section 6 are pre-trained.
Figure 3: Pre-trained Distillation (PD) and concurrent work on model compression.
While we are treating our large teachers as black boxes, it is worth noting that they are produced by pre-training and fine-tuning. Since the teacher could potentially transfer the knowledge it has obtained via pre-training to the student through distillation, it is a priori unclear whether pre-training the student would bring additional benefits. As Section 6.2 shows, pre-training students is surprisingly important, even when millions of samples are available for transfer.
4 COMPARISON TO CONCURRENT WORK
There are concurrent efforts to ours aiming to leverage both pre-training and distillation in the context of building compact models. Though inspired by the two-stage pre-training+fine-tuning approach that enabled deep-and-wide architectures to advance the state-of-the-art in language understanding, they depart from this traditional method in several key ways.
Patient Knowledge Distillation (Sun et al., 2019a) initializes a student from the bottom layers of a deeper pre-trained model, then performs task-specific patient distillation. The training objective relies not only on the teacher output, but also on its intermediate layers, thus making assumptions about the student and teacher architectures. In a parallel line of work, DistilBert (Sanh, 2019) applies the same truncation-based initialization method for the student, then continues its LM pre-training via distillation from a more expensive LM teacher, and finally fine-tunes on task data. Its downside is that LM distillation is computationally expensive, as it requires a softmax operation over the entire vocabulary to compute the expensive LM teacher’s predictive distribution. A common limitation in both studies is that the initialization strategy constrains the student to the teacher embedding size. Table 2 summarizes the differences between concurrent work and Pre-trained Distillation (PD).
To facilitate direct comparison, in this section we perform an experiment with the same model architecture, sizes and dataset settings used in the two studies mentioned above. We perform Pretrained Distillation on a 6-layer BERT student with task supervision from a 12-layer BERTBASE teacher, using embedding size 768 for both models. For distillation, our transfer set coincides with
the labeled set (DT =DL). Table 3 reports results on the 6 GLUE tasks selected by Sun et al. (2019a) and shows that, on average, PD performs best. For anchoring, we also provide quality numbers for pre-training+fine-tuning (PF), which is surprisingly competitive to the more elaborate alternatives in this setting where DT is not larger than DL. Remarkably, PF does not compromise generality or simplicity for quality. Its downside is, however, that it cannot leverage unlabeled task data and teacher model predictions.
5 ANALYSIS SETTINGS
Given these positive results, we aim to gain more insight into Pre-trained Distillation. We perform extensive analyses on two orthogonal axes—model sizes and properties of unlabeled data, thus departing from the settings used in Section 4.
All our models follow the Transformer architecture (Vaswani et al., 2017) and input processing used in BERT (Devlin et al., 2018). We denote the number of hidden layers as L and the hidden embedding size as H , and refer to models by their L/H dimensions. We always fix the number of self-attention heads to H/64 and the feed-forward/filter size to 4H . The end-task models are obtained by stacking a linear classifier on top of the Transformer architectures.
The teacher, BERTLARGE, has dimensions 24L/1024H and 340M parameters. We experiment with 24 student models, with sizes and relative latencies listed in Table 1. The most expensive student, TransformerBASE, is 3 times smaller and 1.25 times faster than the teacher; the cheapest student, TransformerSMALL, is 77 times smaller and 65 times faster. For readability, we report results on a selection of 5 students, but verify that all conclusions hold across the entire 24-model grid.
5.1 ANALYSIS BASELINES
We select three baselines for Pre-trained Distillation that can provide insights into the contributions made by each of its constituent operations.
Basic Training (Figure 4a) is the standard supervised learning method: a compact model is trained directly on the labeled set.
Knowledge Distillation (Figure 4b) (Bucilă et al., 2006; Hinton et al., 2015) (or simply “distillation”) transfers information from a highly-parameterized and accurate teacher model to a more compact and thus less expressive student. For classification tasks, distillation exposes the student to soft labels, namely the class probabilities produced by the teacher pl = softmax(zl/T ), where pl is the output probability for class l, zl is the logit for class l, and T is a constant called temperature that controls the smoothness of the output distribution. The softness of the labels enables better generalization than the gold hard labels. For each end task, we train: (i) a teacher obtained by fine-tuning pre-trained BERTLARGE (24L/1024H) on the labeled dataset (note teachers do not learn from the transfer set), and (ii) 24 students of various sizes. Students are always distilled on the soft labels produced by the teacher with a temperature of 12.
Pre-training+Fine-tuning (Figure 4c) (Dai & Le, 2015; Devlin et al., 2018), or simply PF, leverages large unlabeled general-domain corpora to pre-train models that can be fine-tuned for end tasks.
2While Hinton et al. (2015) show that tuning the temperature could increase performance, we did not observe notable gains. They also propose using a weighted sum of the soft and hard labels, but this approach cannot be applied directly in our set-up, since not all instances in our unlabeled transfers set have hard labels. Our optional final fine-tuning step similarly up-weights the hard labels in the labeled set.
Following BERT, we perform pre-training with the masked LM (MLM) and next sentence objectives (collectively referred to as MLM+ from here on). The resulting model is fine-tuned on end-task labeled data. While pre-training large models has been shown to provide substantial benefits, we are unaware of any prior work systematically studying its effectiveness on compact architectures.
5.2 ANALYSIS TASKS AND DATASETS
The tasks and associated datasets are summarized in Table 4.
Sentiment classification aims to classify text according to the polarities of opinions it contains. We perform 3-way document classification on Amazon Book Reviews (He & McAuley, 2016). Its considerable size (8m) allows us to closely follow the standard distillation setting, where there is a large number of unlabeled examples for transfer. Additionally, we test our algorithm on SST-2 (Socher et al., 2013),
which is a binary sentence classification task, and our results are directly comparable with prior work on the GLUE leaderboard (Wang et al., 2018). We use whole documents from Amazon Movie Reviews (1.7m) as unlabeled transfer data (note that SST-2 consists of single sentences).
Natural language inference involves classifying pairs of sentences (a premise and a hypothesis) as entailment, contradiction, or neutral. This task is representative of the scenario in which proxy data is non-trivial to gather (Gururangan et al., 2018). We chose MNLI (Williams et al., 2018) as our target dataset. Since strictly in-domain data is difficult to obtain, we supplement DT with two other sentence-pair datasets: SNLI (Bowman et al., 2015) and QQP (Chen et al., 2018).
Textual entailment is similar to NLI, but restricted to binary classification (entailment vs nonentailment). The most popular RTE dataset (Bentivogli et al., 2009) is two orders of magnitude smaller than MNLI and offers an extreme test of robustness to the amount of transfer data.
6 ANALYSIS
In this section, we conduct experiments that help us understand why Pre-trained Distillation is successful and how to attribute credit to its constituent operations.
6.1 THERE ARE NO SHORTCUTS: WHY FULL PRE-TRAINING IS NECESSARY
As later elaborated in Section 7, earlier efforts to leverage pre-training in the context of compact models simply feed pre-trained (possibly contextual) input representations into randomly-initialized students (Hu et al., 2018; Chia et al., 2018; Tang et al., 2019). Concurrent work initializes shallowand-wide students from the bottom layers of their deeper pre-trained counterparts (Yang et al., 2019a; Sun et al., 2019a). The experiments below indicate these strategies are suboptimal, and that LM pre-training is necessary in order to unlock the full student potential.
Is it enough to pre-train word embeddings? No. In order to prove that pre-training Transformer layers is important, we compare two flavors of Pre-trained Distillation3: PD with pre-trained word embeddings and PD with pre-trained word embeddings and Transformer layers. We produce wordpiece embeddings by pre-training one-layer Transformers for each embedding size. We then discard the single Transformer layer and keep the embeddings to initialize our students.
For MNLI (Figure 5), less than 24% of the gains PD brings over distillation can be attributed to the pre-trained word embeddings (for TransformerTINY, this drops even lower, to 5%). The rest of the benefits come from additionally pre-training the Transformer layers.
3Note that, for both flavors of PD, none of the student parameters are frozen; the word embeddings do get updated during distillation.
[Figure 5 plot: MNLI accuracy (y-axis from 60 to 80) for students Tiny 2L/128H, Mini 4L/256H, Small 4L/512H, Medium 8L/512H, and Base 12L/768H; curves for PD (MLM pre-training), PD (from 12-layer model), PD (word embeddings), and Distillation.]
Figure 5: Pre-training outperforms truncation. Students initialized via LM pre-training (green) outperform those initialized from the bottom layers of 12-layer pre-trained models (gray). When only word embeddings are pre-trained (red), performance is degraded even further.
Is it worse to truncate deep pre-trained models? Yes, especially for shallow students. Given that pre-training is an expensive process, an exhaustive search over model sizes in the pursuit of the one that meets a certain performance threshold can be impractical. Instead of pre-training all (number of layers, embedding size) combinations of students, one way of short-cutting the process is to pre-train a single deep (e.g. 12-layer) student for each embedding size, then truncate it at various heights. Figure 5 shows that this can be detrimental especially to shallow architectures; TransformerTINY loses more than 73% of the pre-training gains over distillation. As expected, losses fade away as the number of layers increases.
What is the best student for a fixed parameter size budget? As a rule of thumb, prioritize depth over width, especially with pre-trained students. Figure 6 presents a comparison between 24 student model architectures on SST-2, demonstrating how well different students utilize model capacity. They are sorted first by the hidden size, then by the number of layers. This roughly corresponds to a monotonic increase in the number of parameters, with a few exceptions for the largest students. The quality of randomly initialized students (i.e. basic training and distillation) is closely correlated with the number of parameters. With pre-training (i.e. PD and PF), we observe two intuitive findings: (1) pre-trained models are much more effective at using more parameters, and (2) pre-trained models are particularly efficient at utilizing depth, as indicated by the sharp drops in performance when moving to wider but shallower models.
This is yet another argument against initialization via truncation: for instance, truncating the bottom two layers of BERTBASE would lead to a suboptimal distribution of parameters: the 2L/768H model (39.2m parameters) is dramatically worse than e.g. 6L/512H (35.4m parameters).
6.2 UNDER THE HOOD: DISSECTING PRE-TRAINED DISTILLATION
In the previous section, we presented empirical evidence for the importance of the initial LM pretraining step. In this section, we show that distillation brings additional value, especially in the presence of a considerably-sized transfer set, and that fine-tuning ensures robustness when the unlabeled data diverges from the labeled set.
Comparison to analysis baselines First, we quantify how much Pre-trained Distillation improves upon its constituent operations applied in isolation. We compare it against the baselines established in Section 5.1 (basic training, distillation, and pre-training+fine-tuning) on the three NLP tasks
described in Section 5.2. We use the BookCorpus (Zhu et al., 2015) and English Wikipedia as our unlabeled LM set, following the same pre-training procedure as Devlin et al. (2018).
Results in Figure 7 confirm that PD outperforms these baselines, with particularly remarkable results on the Amazon Book Reviews corpus, where TransformerMINI recovers the accuracy of the teacher at a 31x decrease in model size and 16x speed-up. Distillation achieves the same performance with TransformerBASE, which is 10x larger than TransformerMINI. Thus PD can compress the model more effectively than distillation. On RTE, Pre-trained Distillation improves TransformerTINY by more than 5% absolute over the closest baseline (pre-training+fine-tuning) and is the only method to recover teacher accuracy with TransformerBASE.
It is interesting to note that the performance of the baseline systems is closely related to the size of the transfer set. For the sentence-pair tasks such as MNLI and RTE, where the size of the transfer set is moderate (1.3m) and slightly out-of-domain (see Table 4), pre-training+fine-tuning out-performs distillation across all student sizes, with an average of 12% for MNLI and 8% on RTE. Interestingly, the order is inverted on Amazon Book Reviews, where the large transfer set (8m) is strictly indomain: distillation is better than pre-training+fine-tuning by an average of 3%. On the other hand, Pre-trained Distillation is consistently best in all cases. We will examine the robustness of Pre-trained Distillation in the rest of the section.
Robustness to transfer set size It is generally accepted that distillation is reliant upon a large transfer set. For instance, distillation for speech recognition is performed on hundreds of millions of data points (Li et al., 2014; Hinton et al., 2015).
We reaffirm this statement through experiments on Amazon Book Reviews in Figure 8, since Amazon Book Reviews has the biggest transfer set. Distillation barely recovers teacher accuracy with the largest student (TransformerBASE), using the entire 8m transfer set. With only a 1m transfer set, its performance is 4% behind the teacher model. In contrast, PD achieves the same performance with TransformerMINI on 5m instances. In other words, compared to distillation, PD can match the teacher model with a 10x smaller model and 1.5x less transfer data.
Robustness to domain shift To the best of our knowledge, there is no prior work that explicitly studies how distillation is impacted by the mismatch between training and transfer sets (which we will refer to as domain shift). Many previous distillation efforts focus on tasks where the two sets come from the same distribution (Romero et al., 2014; Hinton et al., 2015), while others simply acknowledge the importance of and strive for a close match between them (Bucilă et al., 2006).
We provide empirical evidence that out-of-domain data degrades distillation and that our algorithm is more robust to mismatches between DL and DT . We measure domain shift using the Spearman rank correlation coefficient (which we refer to as Spearman or simply S), introduced as a general metric in (Spearman, 1904) and first used as a corpus similarity metric in (Johansson et al., 1989). To compute corpus similarity, we follow the procedure described in (Kilgarriff & Rose, 1998): for two datasets X and Y , we compute the corresponding frequency ranks FX and FY of their most
common n = 100 words. For each of these words, the difference d_i between its ranks in FX and FY is computed. The final statistic is given by the following formula: 1 − Σ_{i=1}^{100} d_i^2 / (n(n^2 − 1)). To measure the effect of domain shift, we again experiment on the Amazon Book Reviews task. Instead of varying the size of the transfer sets, this time we keep the size fixed (to 1.7m documents) and vary the source of the unlabeled text used for distillation. Transfer set domains vary from not task-related (paragraphs from Wikipedia with S=0.43), to reviews for products of an unrelated category (electronics reviews with S=0.52), followed by reviews from a related category (movie reviews with S=0.76), and finally in-domain book reviews (S=1.0). Results in Figure 9 show a direct correlation between accuracy and the Spearman coefficient for both distillation and PD. When S drops to 0.43, distillation on DT is 1.8% worse than basic training on DL, whereas PD suffers a smaller loss over pre-training+fine-tuning, and a gain of about 1.5% when a final fine-tuning step is added. When reviews from an unrelated product are used as a transfer set (S=0.52), PD obtains a much larger gain from learning from the teacher, compared to distillation.
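A minimal sketch of this corpus-similarity computation, assuming whitespace tokenization (the tokenization, word-selection, and tie-breaking rules are assumptions, not specified above):

from collections import Counter

def corpus_similarity(corpus_x, corpus_y, n=100):
    counts_x, counts_y = Counter(corpus_x.split()), Counter(corpus_y.split())
    # Word selection: the n most common words in the combined corpora
    # (one reading of the procedure; the exact rule is an assumption here).
    top = [w for w, _ in (counts_x + counts_y).most_common(n)]
    # Frequency rank of each word in each corpus (1 = most frequent).
    def ranks(counts):
        order = sorted(top, key=lambda w: -counts[w])
        return {w: r + 1 for r, w in enumerate(order)}
    fx, fy = ranks(counts_x), ranks(counts_y)
    d_sq = sum((fx[w] - fy[w]) ** 2 for w in top)
    # Formula as written in the text; note the classic Spearman statistic
    # also multiplies the numerator by 6.
    return 1.0 - d_sq / (n * (n ** 2 - 1))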
6.3 BETTER TOGETHER: THE COMPOUND EFFECT OF PRE-TRAINING AND DISTILLATION
We investigate the interaction between pretraining and distillation by applying them sequentially on the same data. We compare the following two algorithms: Pre-training+Finetuning with DLM = X and Pre-trained Distillation with DLM = DT = X . Any additional gains that the latter brings over the former must be attributed to distillation, providing evidence that the compound effect still exists.
For MNLI, we set DLM = DT = NLI* and continue the experiment above by taking the students pre-trained on DLM = NLI* and distilling them on DT = NLI*. As shown in Figure 10, PD is better than PF by 2.2% on average over all student sizes. Note that even when pre-training and then distilling on the same data, PD outperforms the two training strategies applied in isolation. The two methods are thus
learning different linguistic aspects, both useful for the end task.
7 RELATED WORK
Pre-training Decades of research have shown that unlabeled text can help learn language representations. Word embeddings were first used (Mikolov et al., 2013; Pennington et al., 2014), while subsequently contextual word representations were found more effective (Peters et al., 2018). Most recently, research has shifted towards fine-tuning methods (Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019), where entire large pre-trained representations are fine-tuned for end tasks together with a small number of task-specific parameters. While feature-based unsupervised representations have been successfully used in compact models (Johnson & Zhang, 2015; Gururangan et al., 2019), inter alia, the pretraining+fine-tuning approach has not been studied in depth for such small models.
Learning compact models In this work we built on model compression (Bucilă et al., 2006) and its variant knowledge distillation (Hinton et al., 2015). Other related efforts introduced ways to transfer more information from a teacher to a student model, by sharing intermediate layer activations (Romero et al., 2014; Yim et al., 2017; Sun et al., 2019a). We experimented with related approaches, but found only slight gains which were dominated by the gains from pre-training and were not complementary. Prior works have also noted the unavailability of in-domain large-scale transfer data and proposed the use of automatically generated pseudo-examples (Bucilă et al., 2006; Kimura et al., 2018). Here we showed that large-scale general domain text can be successfully used for pre-training instead. A separate line of work uses pruning or quantization to derive smaller models (Han et al., 2016; Gupta et al., 2015). Gains from such techniques are expected to be complementary to PD.
Distillation with unsupervised pre-training Early efforts to leverage both unsupervised pretraining and distillation provide pre-trained (possibly contextual) word embeddings as inputs to students, rather than pre-training the student stack. For instance, Hu et al. (2018) use ELMo embeddings, while (Chia et al., 2018; Tang et al., 2019) use context-independent word embeddings. Concurrent work initializes Transformer students from the bottom layers of a 12-layer BERT model (Yang et al., 2019a; Sun et al., 2019a; Sanh, 2019). The latter continues student LM pre-training via distillation from a more expensive LM teacher. For a different purpose of deriving a single model for multiple tasks through distillation, Clark et al. (2019) use a pre-trained student model of the same size as multiple teacher models. However, none of the prior work has analyzed the impact of unsupervised learning for students in relation to the model size and domain of the transfer set.
8 CONCLUSION
We conducted extensive experiments to gain understanding of how knowledge distillation and the pre-training+fine-tuning algorithm work in isolation, and how they interact. We made the finding that their benefits compound, and unveiled the power of Pre-trained Distillation, a simple yet effective method to maximize the utilization of all available resources: a powerful teacher, and multiple sources of data (labeled sets, unlabeled transfer sets, and unlabeled LM sets). | 1. What is the main contribution of the paper regarding knowledge distillation?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the thoroughness and quality of the presented experiments?
4. Are there any concerns or suggestions for improving the paper's content or methodology? | Review | Review
This submission revisits the student-teacher paradigm and shows through extensive experiments that pre-training a student directly on masked language modeling is better than distillation (from scratch). It also shows that the best is to combine both and distill from that pre-trained student model.
My rating is Weak Accept. I think the submission highlights a very useful observation about knowledge distillation that I imagine is overlooked by many researchers and practitioners. The decision of Weak as opposed to a Strong accept is because the submission does not introduce anything truly novel, but simply points out observations and offers a recommended training strategy. However, I do argue for its acceptance, because it does a thorough job and presents many interesting findings that can benefit the community.
Comparison with prior work:
The submission focuses on comparison with Sun et al. and Sanh. These comparisons are important, but not the most compelling part of the paper. Comparison with more prior work that show large benefits would make the paper even stronger.
Interesting experiments:
The paper presents many interesting experiments useful for anyone trying to develop a compressed model. First, it shows that distillation (from scratch) by itself may be overrated, since simply repeating the pre-training+fine-tuning procedure on the small model directly is effective. However, distillation remains relevant since it also shows that pre-training the student, then distilling against a teacher, is a potent combination. In the case when the transfer set is the same size as the pre-training set, it surprisingly still has some benefits. This is not experimentally explained, but I suspect there are optimization benefits that are hard to pin down exactly. The paper hypothesizes that the two methods learn different “linguistic aspects,” but I think it is a bit too speculative to put it in such terms.
The experiments are thorough, with many student sizes, transfer set sizes, transfer set/task set correlation, etc. It also compares against the truncation technique, where the student is initialized with a truncated version of the teacher. There are no error bars in the plots, but there are so many plots with clear trends, that this is not a big concern. I can’t think of any experiments that are obviously missing.
Misc:
- The introduction says that the pre-training+fine-tuning baseline has been overlooked. It would be great to point out papers that have actually overlooked this baseline. Including this in the results would be even better.
- During my first read-through, I got confused because I didn’t realize “pre-training” in most of the paper refers to “student pre-training” (as opposed to simply training the teacher). Making this a bit more explicit here and there can avoid this confusion. |
ICLR | Title
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Abstract
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019a; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.
1 INTRODUCTION
Self-supervised learning on a general-domain text corpus followed by end-task learning is the twostaged training approach that enabled deep-and-wide Transformer-based networks (Vaswani et al., 2017) to advance language understanding (Devlin et al., 2018; Yang et al., 2019b; Sun et al., 2019b; Liu et al., 2019). However, state-of-the-art models have hundreds of millions of parameters, incurring a high computational cost. Our goal is to realize their gains under a restricted memory and latency budget. We seek a training method that is well-performing, general and simple and can leverage additional resources such as unlabeled task data.
Before considering compression techniques, we start with the following research question: Could we directly train small models using the same two-staged approach? In other words, we explore the idea of applying language model (LM) pre-training and task fine-tuning to compact architectures directly. This simple baseline has so far been overlooked by the NLP community, potentially based on an underlying assumption that the limited capacity of compact models is capitalized better when focusing on the end task rather than a general language model objective. Concurrent work to ours proposes variations of the standard pre-training+fine-tuning procedure, but with limited generality (Sun et al., 2019a; Sanh, 2019). We make the surprising finding that pre-training+fine-tuning in its original formulation is a competitive method for building compact models.
For further gains, we additionally leverage knowledge distillation (Hinton et al., 2015), the standard technique for model compression. A compact student is trained to recover the predictions of a highly accurate teacher. In addition to the posited regularization effect of these soft labels (Hinton et al., 2015), distillation provides a means of producing pseudo-labels for unlabeled data. By regarding LM pre-training of compact models as a student initialization strategy, we can take advantage of both methods. The resulting algorithm is a sequence of three standard training operations: masked LM (MLM) pre-training (Devlin et al., 2018), task-specific distillation, and optional fine-tuning. From here on, we will refer to it as Pre-trained Distillation (PD) (Figure 1). As we will show in
Algorithm 1 — Require: student θ, teacher Ω, unlabeled LM data DLM, unlabeled transfer data DT, labeled data DL
1: Initialize θ by pre-training an MLM+ on DLM
2: for each x ∈ DT do
3:   Get loss L ← −Σ_y P_Ω(y|x) log P_θ(y|x)
4:   Update student θ ← BACKPROP(L, θ)
5: end for
6: Fine-tune θ on DL (optional step)
7: return θ
[Figure 1: Pre-trained Distillation — diagram of the three training steps applied to the compact model: pre-training on unlabeled LM data, distillation from the large teacher on unlabeled transfer data, and optional fine-tuning on labeled data, yielding the final compact model.]
Section 6.2, PD outperforms the pre-training+fine-tuning (PF) baseline, especially in the presence of a large transfer set for distillation.
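To make the procedure in Algorithm 1 concrete, here is a minimal PyTorch-style sketch; pretrain_mlm_plus and finetune are placeholder names for the pre-training and fine-tuning routines described in the text, and the optimizer settings are assumptions:

import torch
import torch.nn.functional as F

def pretrained_distillation(student, teacher, d_lm, d_transfer, d_labeled=None):
    # Step 1: initialize the student via MLM+ pre-training on the unlabeled LM set.
    pretrain_mlm_plus(student, d_lm)            # placeholder routine
    # Step 2: distill on the unlabeled transfer set with the teacher's predictive distribution.
    opt = torch.optim.SGD(student.parameters(), lr=1e-4)  # optimizer/lr are illustrative
    for x in d_transfer:
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x), dim=-1)          # P_Omega(y|x)
        loss = -(teacher_probs * F.log_softmax(student(x), dim=-1)).sum(dim=-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Step 3 (optional): fine-tune on the labeled set.
    if d_labeled is not None:
        finetune(student, d_labeled)            # placeholder routine
    return student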
In a controlled study following data and model architecture settings in concurrent work (Section 4), we show that Pre-trained Distillation outperforms or is competitive with more elaborate approaches which use either more sophisticated distillation of task knowledge (Sun et al., 2019a) or more sophisticated pre-training from unlabeled text (Sanh, 2019). The former distill task knowledge from intermediate teacher activations, starting with a heuristically initialized student. The latter fine-tune a compact model that is pre-trained on unlabeled text with the help of a larger LM teacher.
One of the most noteworthy contributions of our paper are the extensive experiments that examine how Pre-trained Distillation and its baselines perform under various conditions. We investigate two axes that have been under-studied in previous work: model size and amount/quality of unlabeled data. While experimenting with 24 models of various sizes (4m to 110m parameters) and depth/width trade-offs, we observe that pre-trained students can leverage depth much better than width; in contrast, this property is not visible for randomly-initialized models. For the second axis, we vary the amount of unlabeled data, as well as its similarity to the labeled set. Interestingly, Pretrained Distillation is more robust to these variations in the transfer set than standard distillation.
Finally, in order to gain insight into the interaction between LM pre-training and task-specific distillation, we sequentially apply these operations on the same dataset. In this experiment, chaining the two operations performs better than any one of them applied in isolation, despite the fact that a single dataset was used for both steps. This compounding effect is surprising, indicating that pre-training and distillation are learning complementary aspects of the data.
Given the effectiveness of LM pre-training on compact architectures, we will make our 24 pretrained miniature BERT models publicly available in order to accelerate future research.
2 PROBLEM STATEMENT
Our high-level goal is to build accurate models which fit a given memory and latency budget. There are many aspects to explore: the parametric form of the compact model (architecture, number of parameters, trade-off between number of hidden layers and embedding size), the training data (size, distribution, presence or absence of labels, training objective), etc. Since an exhaustive search over this space is impractical, we fix the model architecture to bidirectional Transformers, known to be suitable for a wide range of NLP tasks (Vaswani et al., 2017; Devlin et al., 2018). The rest of this section elaborates on the training resources we assume to have at our disposal.
The teacher is a highly accurate but large model for an end task, that does not meet the resource constraints. Prior work on distillation often makes use of an ensemble of networks (Hinton et al., 2015). For faster experimentation, we use a single teacher, without making a statement about the best architectural choice. In Section 4, the teacher is pre-trained BERTBASE fine-tuned on labeled end-task data. In Section 6, we use BERTLARGE instead.
Students are compact models that satisfy resource constraints. Since model size qualifiers are relative (e.g., what is considered small in a data center can be impractically large on a mobile device),
we investigate an array of 24 model sizes, from our TransformerTINY (4m parameters) all the way up to TransformerBASE (110m parameters)1. The student model sizes and their relative speed-up compared to the BERTLARGE teacher can be found in Table 1. Interested readers can situate themselves on this spectrum based on their resource constraints. For readability, most plots show a selection of 5 models, but we verify that our conclusions hold for all 24.
Labeled data (DL) is a set of N training examples {(x1, y1), ..., (xN , yN )}, where xi is an input and yi is a label. For most NLP tasks, labeled sets are hard to produce and thus restricted in size.
Unlabeled transfer data (DT ) is a set ofM input examples of the form {x′1, ..., x′M} sampled from a distribution that is similar to but possibly not identical to the input distribution of the labeled set. During distillation, the teacher transfers knowledge to the student by exposing its label predictions for instances x′m. DT can also include the input portion of labeled data DL instances. Due to the lack of true labels, such sets are generally easier to produce and consequently larger than labeled ones. Note, however, that task-relevant input text is not readily available for key tasks requiring paired texts such as natural language inference and question answering, as well as domain-specific dialog understanding. In addition, for deployed systems, input data distribution shifts over time and existing unlabeled data becomes stale (Kim et al., 2017).
Unlabeled language model data (DLM ) is a collection of natural language texts that enable unsupervised learning of text representations. We use it for unsupervised pre-training with a masked language model objective (Devlin et al., 2018). Because no labels are needed and strong domain similarity is not required, these corpora are often vast, containing thousands of millions of words.
The distinction between the three types of datasets is strictly functional. Note they are not necessarily disjunct. For instance, the same corpus that forms the labeled data can also be part of the unlabeled transfer set, after its labels are discarded. Similarly, corpora that are included in the transfer set can also be used as unlabeled LM data.
3 PRE-TRAINED DISTILLATION
Pre-trained Distillation (PD) (Figure 1) is a general, yet simple algorithm for building compact models that can leverage all the resources enumerated in Section 2. It consists of a sequence of three standard training operations that can be applied to any choice of architecture:
1. Pre-training on DLM . A compact model is trained with a masked LM objective (Devlin et al., 2018), capturing linguistic phenomena from a large corpus of natural language texts.
2. Distillation on DT . This well-read student is now prepared to take full advantage of the teacher expertise, and is trained on the soft labels (predictive distribution) produced by the teacher. As we will show in Section 6.2, randomly initialized distillation is constrained by the size and distribution of its unlabeled transfer set. However, the previous pre-training step mitigates to some extent the negative effects caused by an imperfect transfer set.
3. (Optional) fine-tuning on DL. This step makes the model robust to potential mismatches between the distribution of the transfer and labeled sets. We will refer to the two-step algorithm as PD, and to the three-step algorithm as PDF.
1Note that our TransformerBASE and BERTBASE in Devlin et al. (2018) have the same architecture. We use the former term for clarity, since not all students in Section 6 are pre-trained.
Figure 3: Pre-trained Distillation (PD) and concurrent work on model compression.
While we are treating our large teachers as black boxes, it is worth noting that they are produced by pre-training and fine-tuning. Since the teacher could potentially transfer the knowledge it has obtained via pre-training to the student through distillation, it is a priori unclear whether pre-training the student would bring additional benefits. As Section 6.2 shows, pre-training students is surprisingly important, even when millions of samples are available for transfer.
4 COMPARISON TO CONCURRENT WORK
There are concurrent efforts to ours aiming to leverage both pre-training and distillation in the context of building compact models. Though inspired by the two-stage pre-training+fine-tuning approach that enabled deep-and-wide architectures to advance the state-of-the-art in language understanding, they depart from this traditional method in several key ways.
Patient Knowledge Distillation (Sun et al., 2019a) initializes a student from the bottom layers of a deeper pre-trained model, then performs task-specific patient distillation. The training objective relies not only on the teacher output, but also on its intermediate layers, thus making assumptions about the student and teacher architectures. In a parallel line of work, DistilBert (Sanh, 2019) applies the same truncation-based initialization method for the student, then continues its LM pre-training via distillation from a more expensive LM teacher, and finally fine-tunes on task data. Its downside is that LM distillation is computationally expensive, as it requires a softmax operation over the entire vocabulary to compute the expensive LM teacher’s predictive distribution. A common limitation in both studies is that the initialization strategy constrains the student to the teacher embedding size. Table 2 summarizes the differences between concurrent work and Pre-trained Distillation (PD).
To facilitate direct comparison, in this section we perform an experiment with the same model architecture, sizes and dataset settings used in the two studies mentioned above. We perform Pretrained Distillation on a 6-layer BERT student with task supervision from a 12-layer BERTBASE teacher, using embedding size 768 for both models. For distillation, our transfer set coincides with
the labeled set (DT =DL). Table 3 reports results on the 6 GLUE tasks selected by Sun et al. (2019a) and shows that, on average, PD performs best. For anchoring, we also provide quality numbers for pre-training+fine-tuning (PF), which is surprisingly competitive to the more elaborate alternatives in this setting where DT is not larger than DL. Remarkably, PF does not compromise generality or simplicity for quality. Its downside is, however, that it cannot leverage unlabeled task data and teacher model predictions.
5 ANALYSIS SETTINGS
Given these positive results, we aim to gain more insight into Pre-trained Distillation. We perform extensive analyses on two orthogonal axes—model sizes and properties of unlabeled data, thus departing from the settings used in Section 4.
All our models follow the Transformer architecture (Vaswani et al., 2017) and input processing used in BERT (Devlin et al., 2018). We denote the number of hidden layers as L and the hidden embedding size as H , and refer to models by their L/H dimensions. We always fix the number of self-attention heads to H/64 and the feed-forward/filter size to 4H . The end-task models are obtained by stacking a linear classifier on top of the Transformer architectures.
The teacher, BERTLARGE, has dimensions 24L/1024H and 340M parameters. We experiment with 24 student models, with sizes and relative latencies listed in Table 1. The most expensive student, TransformerBASE, is 3 times smaller and 1.25 times faster than the teacher; the cheapest student, TransformerTINY, is 77 times smaller and 65 times faster. For readability, we report results on a selection of 5 students, but verify that all conclusions hold across the entire 24-model grid.
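The architecture conventions above (attention heads fixed to H/64, feed-forward size to 4H) can be captured in a small helper; the dictionary keys are illustrative rather than an actual configuration format:

def student_config(num_layers, hidden_size):
    # Conventions from this section: heads = H/64, feed-forward/filter size = 4H.
    # (Dictionary keys are illustrative, not an actual config schema.)
    return {
        "num_hidden_layers": num_layers,
        "hidden_size": hidden_size,
        "num_attention_heads": hidden_size // 64,
        "intermediate_size": 4 * hidden_size,
    }

# The five students highlighted in the plots (L/H values as listed in the paper).
tiny, mini = student_config(2, 128), student_config(4, 256)
small, medium, base = student_config(4, 512), student_config(8, 512), student_config(12, 768)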
5.1 ANALYSIS BASELINES
We select three baselines for Pre-trained Distillation that can provide insights into the contributions made by each of its constituent operations.
Basic Training (Figure 4a) is the standard supervised learning method: a compact model is trained directly on the labeled set.
Knowledge Distillation (Figure 4b) (Bucilă et al., 2006; Hinton et al., 2015) (or simply “distillation”) transfers information from a highly-parameterized and accurate teacher model to a more compact and thus less expressive student. For classification tasks, distillation exposes the student to soft labels, namely the class probabilities produced by the teacher p_l = softmax(z_l/T), where p_l is the output probability for class l, z_l is the logit for class l, and T is a constant called temperature that controls the smoothness of the output distribution. The softness of the labels enables better generalization than the gold hard labels. For each end task, we train: (i) a teacher obtained by fine-tuning pre-trained BERTLARGE (24L/1024H) on the labeled dataset (note teachers do not learn from the transfer set), and (ii) 24 students of various sizes. Students are always distilled on the soft labels produced by the teacher with a temperature of 1.
Pre-training+Fine-tuning (Figure 4c) (Dai & Le, 2015; Devlin et al., 2018), or simply PF, leverages large unlabeled general-domain corpora to pre-train models that can be fine-tuned for end tasks.
2While Hinton et al. (2015) show that tuning the temperature could increase performance, we did not observe notable gains. They also propose using a weighted sum of the soft and hard labels, but this approach cannot be applied directly in our set-up, since not all instances in our unlabeled transfer set have hard labels. Our optional final fine-tuning step similarly up-weights the hard labels in the labeled set.
Following BERT, we perform pre-training with the masked LM (MLM) and next sentence objectives (collectively referred to as MLM+ from here on). The resulting model is fine-tuned on end-task labeled data. While pre-training large models has been shown to provide substantial benefits, we are unaware of any prior work systematically studying its effectiveness on compact architectures.
5.2 ANALYSIS TASKS AND DATASETS
The tasks and associated datasets are summarized in Table 4.
Sentiment classification aims to classify text according to the polarities of opinions it contains. We perform 3-way document classification on Amazon Book Reviews (He & McAuley, 2016). Its considerable size (8m) allows us to closely follow the standard distillation setting, where there is a large number of unlabeled examples for transfer. Additionally, we test our algorithm on SST-2 (Socher et al., 2013),
which is a binary sentence classification task, and our results are directly comparable with prior work on the GLUE leaderboard (Wang et al., 2018). We use whole documents from Amazon Movie Reviews (1.7m) as unlabeled transfer data (note that SST-2 consists of single sentences).
Natural language inference involves classifying pairs of sentences (a premise and a hypothesis) as entailment, contradiction, or neutral. This task is representative of the scenario in which proxy data is non-trivial to gather (Gururangan et al., 2018). We chose MNLI (Williams et al., 2018) as our target dataset. Since strictly in-domain data is difficult to obtain, we supplement DT with two other sentence-pair datasets: SNLI (Bowman et al., 2015) and QQP (Chen et al., 2018).
Textual entailment is similar to NLI, but restricted to binary classification (entailment vs nonentailment). The most popular RTE dataset (Bentivogli et al., 2009) is two orders of magnitude smaller than MNLI and offers an extreme test of robustness to the amount of transfer data.
6 ANALYSIS
In this section, we conduct experiments that help us understand why Pre-trained Distillation is successful and how to attribute credit to its constituent operations.
6.1 THERE ARE NO SHORTCUTS: WHY FULL PRE-TRAINING IS NECESSARY
As later elaborated in Section 7, earlier efforts to leverage pre-training in the context of compact models simply feed pre-trained (possibly contextual) input representations into randomly-initialized students (Hu et al., 2018; Chia et al., 2018; Tang et al., 2019). Concurrent work initializes shallow-and-wide students from the bottom layers of their deeper pre-trained counterparts (Yang et al., 2019a; Sun et al., 2019a). The experiments below indicate these strategies are suboptimal, and that LM pre-training is necessary in order to unlock the full student potential.
Is it enough to pre-train word embeddings? No. In order to prove that pre-training Transformer layers is important, we compare two flavors of Pre-trained Distillation3: PD with pre-trained word embeddings and PD with pre-trained word embeddings and Transformer layers. We produce wordpiece embeddings by pre-training one-layer Transformers for each embedding size. We then discard the single Transformer layer and keep the embeddings to initialize our students.
For MNLI (Figure 5), less than 24% of the gains PD brings over distillation can be attributed to the pre-trained word embeddings (for TransformerTINY, this drops even lower, to 5%). The rest of the benefits come from additionally pre-training the Transformer layers.
3Note that, for both flavors of PD, none of the student parameters are frozen; the word embeddings do get updated during distillation.
[Figure 5 plot: MNLI accuracy (y-axis from 60 to 80) for students Tiny 2L/128H, Mini 4L/256H, Small 4L/512H, Medium 8L/512H, and Base 12L/768H; curves for PD (MLM pre-training), PD (from 12-layer model), PD (word embeddings), and Distillation.]
Figure 5: Pre-training outperforms truncation. Students initialized via LM pre-training (green) outperform those initialized from the bottom layers of 12-layer pre-trained models (gray). When only word embeddings are pre-trained (red), performance is degraded even further.
Is it worse to truncate deep pre-trained models? Yes, especially for shallow students. Given that pre-training is an expensive process, an exhaustive search over model sizes in the pursuit of the one that meets a certain performance threshold can be impractical. Instead of pre-training all (number of layers, embedding size) combinations of students, one way of short-cutting the process is to pre-train a single deep (e.g. 12-layer) student for each embedding size, then truncate it at various heights. Figure 5 shows that this can be detrimental especially to shallow architectures; TransformerTINY loses more than 73% of the pre-training gains over distillation. As expected, losses fade away as the number of layers increases.
What is the best student for a fixed parameter size budget? As a rule of thumb, prioritize depth over width, especially with pre-trained students. Figure 6 presents a comparison between 24 student model architectures on SST-2, demonstrating how well different students utilize model capacity. They are sorted first by the hidden size, then by the number of layers. This roughly corresponds to a monotonic increase in the number of parameters, with a few exceptions for the largest students. The quality of randomly initialized students (i.e. basic training and distillation) is closely correlated with the number of parameters. With pre-training (i.e. PD and PF), we observe two intuitive findings: (1) pre-trained models are much more effective at using more parameters, and (2) pre-trained models are particularly efficient at utilizing depth, as indicated by the sharp drops in performance when moving to wider but shallower models.
This is yet another argument against initialization via truncation: for instance, truncating the bottom two layers of BERTBASE would lead to a suboptimal distribution of parameters: the 2L/768H model (39.2m parameters) is dramatically worse than e.g. 6L/512H (35.4m parameters).
6.2 UNDER THE HOOD: DISSECTING PRE-TRAINED DISTILLATION
In the previous section, we presented empirical evidence for the importance of the initial LM pretraining step. In this section, we show that distillation brings additional value, especially in the presence of a considerably-sized transfer set, and that fine-tuning ensures robustness when the unlabeled data diverges from the labeled set.
Comparison to analysis baselines First, we quantify how much Pre-trained Distillation improves upon its constituent operations applied in isolation. We compare it against the baselines established in Section 5.1 (basic training, distillation, and pre-training+fine-tuning) on the three NLP tasks
described in Section 5.2. We use the BookCorpus (Zhu et al., 2015) and English Wikipedia as our unlabeled LM set, following the same pre-training procedure as Devlin et al. (2018).
Results in Figure 7 confirm that PD outperforms these baselines, with particularly remarkable results on the Amazon Book Reviews corpus, where TransformerMINI recovers the accuracy of the teacher at a 31x decrease in model size and 16x speed-up. Distillation achieves the same performance with TransformerBASE, which is 10x larger than TransformerMINI. Thus PD can compress the model more effectively than distillation. On RTE, Pre-trained Distillation improves TransformerTINY by more than 5% absolute over the closest baseline (pre-training+fine-tuning) and is the only method to recover teacher accuracy with TransformerBASE.
It is interesting to note that the performance of the baseline systems is closely related to the size of the transfer set. For the sentence-pair tasks such as MNLI and RTE, where the size of the transfer set is moderate (1.3m) and slightly out-of-domain (see Table 4), pre-training+fine-tuning out-performs distillation across all student sizes, with an average of 12% for MNLI and 8% on RTE. Interestingly, the order is inverted on Amazon Book Reviews, where the large transfer set (8m) is strictly indomain: distillation is better than pre-training+fine-tuning by an average of 3%. On the other hand, Pre-trained Distillation is consistently best in all cases. We will examine the robustness of Pre-trained Distillation in the rest of the section.
Robustness to transfer set size It is generally accepted that distillation is reliant upon a large transfer set. For instance, distillation for speech recognition is performed on hundreds of millions of data points (Li et al., 2014; Hinton et al., 2015).
We reaffirm this statement through experiments on Amazon Book Reviews in Figure 8, since Amazon Book Reviews has the biggest transfer set. Distillation barely recovers teacher accuracy with the largest student (TransformerBASE), using the entire 8m transfer set. With only a 1m transfer set, its performance is 4% behind the teacher model. In contrast, PD achieves the same performance with TransformerMINI on 5m instances. In other words, compared to distillation, PD can match the teacher model with a 10x smaller model and 1.5x less transfer data.
Robustness to domain shift To the best of our knowledge, there is no prior work that explicitly studies how distillation is impacted by the mismatch between training and transfer sets (which we will refer to as domain shift). Many previous distillation efforts focus on tasks where the two sets come from the same distribution (Romero et al., 2014; Hinton et al., 2015), while others simply acknowledge the importance of and strive for a close match between them (Bucilă et al., 2006).
We provide empirical evidence that out-of-domain data degrades distillation and that our algorithm is more robust to mismatches between DL and DT . We measure domain shift using the Spearman rank correlation coefficient (which we refer to as Spearman or simply S), introduced as a general metric in (Spearman, 1904) and first used as a corpus similarity metric in (Johansson et al., 1989). To compute corpus similarity, we follow the procedure described in (Kilgarriff & Rose, 1998): for two datasets X and Y , we compute the corresponding frequency ranks FX and FY of their most
common n = 100 words. For each of these words, the difference d_i between its ranks in FX and FY is computed. The final statistic is given by the following formula: 1 − Σ_{i=1}^{100} d_i^2 / (n(n^2 − 1)). To measure the effect of domain shift, we again experiment on the Amazon Book Reviews task. Instead of varying the size of the transfer sets, this time we keep the size fixed (to 1.7m documents) and vary the source of the unlabeled text used for distillation. Transfer set domains vary from not task-related (paragraphs from Wikipedia with S=0.43), to reviews for products of an unrelated category (electronics reviews with S=0.52), followed by reviews from a related category (movie reviews with S=0.76), and finally in-domain book reviews (S=1.0). Results in Figure 9 show a direct correlation between accuracy and the Spearman coefficient for both distillation and PD. When S drops to 0.43, distillation on DT is 1.8% worse than basic training on DL, whereas PD suffers a smaller loss over pre-training+fine-tuning, and a gain of about 1.5% when a final fine-tuning step is added. When reviews from an unrelated product are used as a transfer set (S=0.52), PD obtains a much larger gain from learning from the teacher, compared to distillation.
6.3 BETTER TOGETHER: THE COMPOUND EFFECT OF PRE-TRAINING AND DISTILLATION
We investigate the interaction between pretraining and distillation by applying them sequentially on the same data. We compare the following two algorithms: Pre-training+Finetuning with DLM = X and Pre-trained Distillation with DLM = DT = X . Any additional gains that the latter brings over the former must be attributed to distillation, providing evidence that the compound effect still exists.
For MNLI, we set DLM = DT = NLI* and continue the experiment above by taking the students pre-trained on DLM = NLI* and distilling them on DT = NLI*. As shown in Figure 10, PD is better than PF by 2.2% on average over all student sizes. Note that even when pre-training and then distilling on the same data, PD outperforms the two training strategies applied in isolation. The two methods are thus
learning different linguistic aspects, both useful for the end task.
7 RELATED WORK
Pre-training Decades of research have shown that unlabeled text can help learn language representations. Word embeddings were first used (Mikolov et al., 2013; Pennington et al., 2014), while subsequently contextual word representations were found more effective (Peters et al., 2018). Most recently, research has shifted towards fine-tuning methods (Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019), where entire large pre-trained representations are fine-tuned for end tasks together with a small number of task-specific parameters. While feature-based unsupervised representations have been successfully used in compact models (Johnson & Zhang, 2015; Gururangan et al., 2019), inter alia, the pretraining+fine-tuning approach has not been studied in depth for such small models.
Learning compact models In this work we built on model compression (Bucilă et al., 2006) and its variant knowledge distillation (Hinton et al., 2015). Other related efforts introduced ways to transfer more information from a teacher to a student model, by sharing intermediate layer activations (Romero et al., 2014; Yim et al., 2017; Sun et al., 2019a). We experimented with related approaches, but found only slight gains which were dominated by the gains from pre-training and were not complementary. Prior works have also noted the unavailability of in-domain large-scale transfer data and proposed the use of automatically generated pseudo-examples (Bucilă et al., 2006; Kimura et al., 2018). Here we showed that large-scale general domain text can be successfully used for pre-training instead. A separate line of work uses pruning or quantization to derive smaller models (Han et al., 2016; Gupta et al., 2015). Gains from such techniques are expected to be complementary to PD.
Distillation with unsupervised pre-training Early efforts to leverage both unsupervised pretraining and distillation provide pre-trained (possibly contextual) word embeddings as inputs to students, rather than pre-training the student stack. For instance, Hu et al. (2018) use ELMo embeddings, while (Chia et al., 2018; Tang et al., 2019) use context-independent word embeddings. Concurrent work initializes Transformer students from the bottom layers of a 12-layer BERT model (Yang et al., 2019a; Sun et al., 2019a; Sanh, 2019). The latter continues student LM pre-training via distillation from a more expensive LM teacher. For a different purpose of deriving a single model for multiple tasks through distillation, Clark et al. (2019) use a pre-trained student model of the same size as multiple teacher models. However, none of the prior work has analyzed the impact of unsupervised learning for students in relation to the model size and domain of the transfer set.
8 CONCLUSION
We conducted extensive experiments to gain understanding of how knowledge distillation and the pre-training+fine-tuning algorithm work in isolation, and how they interact. We made the finding that their benefits compound, and unveiled the power of Pre-trained Distillation, a simple yet effective method to maximize the utilization of all available resources: a powerful teacher, and multiple sources of data (labeled sets, unlabeled transfer sets, and unlabeled LM sets). | 1. What are the concerns regarding the effectiveness of the proposed method compared to other baselines?
2. How does the reviewer assess the complexity of the proposed method in comparison to prior works?
3. What is the hypothesis regarding the performance of randomly initialized students versus pre-trained ones?
4. Why does the reviewer believe that the mixed results in Table 3 do not adequately support the proposed approach? | Review | Review
This paper proposes to pre-train a student before training with a teacher, which is easy to understand. Although the authors provide extensive empirical studies, I do not think they can justify the claims in this paper.
** Argument
One concern is that compared to other baselines such as "Patient knowledge distillation" [1], the proposed method is not consistently better. The authors argue that [1] is more sophisticated in that they distill task knowledge from intermediate teacher activations. However, the proposed method introduces other extra complexities, such as pre-training the student. I do not agree that the proposed method is less elaborate than previous methods.
Although the investigation on influence of model size and the amount/quality of unlabeled data is interesting in itself, this does not help justify the usefulness of pre-training a student. I hypothesize that when considering the intermediate feature maps as additional training signals, randomly initialized students can catch up with pre-trained students.
Furthermore, the mixed results shown in Table 3 do not justify the proposed method well enough.
[1] Patient Knowledge Distillation for BERT Model Compression, https://arxiv.org/abs/1908.09355 |
ICLR | Title
Strength-Adaptive Adversarial Training
Abstract
Adversarial training (AT) is proved to reliably improve network’s robustness against adversarial data. However, current AT with a pre-specified perturbation budget has limitations in learning a robust network. Firstly, applying a prespecified perturbation budget on networks of various model capacities will yield divergent degree of robustness disparity between natural and robust accuracies, which deviates from robust network’s desideratum. Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget fails to upgrade as the growth of network robustness, which leads to robust overfitting and further degrades the adversarial robustness. To overcome these limitations, we propose Strength-Adaptive Adversarial Training (SAAT). Specifically, the adversary employs an adversarial loss constraint to generate adversarial training data. Under this constraint, the perturbation budget will be adaptively adjusted according to the training state of adversarial data, which can effectively avoid robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data through the adversarial loss, which manipulates model capacity scheduling during training, and thereby can flexibly control the degree of robustness disparity and adjust the tradeoff between natural accuracy and robustness. Extensive experiments show that our proposal boosts the robustness of adversarial training.
1 INTRODUCTION
Current deep neural networks (DNNs) achieve impressive breakthroughs on a variety of fields such as computer vision (He et al., 2016), speech recognition (Wang et al., 2017), and NLP (Devlin et al., 2018), but it is well-known that DNNs are vulnerable to adversarial data: small perturbations of the input which are imperceptible to humans will cause wrong outputs (Szegedy et al., 2013; Goodfellow et al., 2014). As countermeasures against adversarial data, adversarial training (AT) is a method for hardening networks against adversarial attacks (Madry et al., 2017). AT trains the network using adversarial data that are constrained by a pre-specified perturbation budget, which aims to obtain the output network with the minimum adversarial risk of an sample to be wrongly classified under the same perturbation budget. Across existing defense techniques, AT has been proved to be one of the most effective and reliable methods against adversarial attacks (Athalye et al., 2018).
Although promising for improving the network’s robustness, AT with a pre-specified perturbation budget still has limitations in learning a robust network. Firstly, the pre-specified perturbation budget is not adaptable to networks of various model capacities, yielding divergent degrees of robustness disparity between natural and robust accuracies, which deviates from a robust network’s desideratum. Ideally, for a robust network, perturbing the attack budget within a small range should not cause significant accuracy degradation. Unfortunately, the degree of robustness disparity is intractable for AT with a pre-specified perturbation budget. In standard AT, there can be a prominent degree of robustness disparity in the output networks. For instance, a standard PGD adversarially-trained PreAct ResNet-18 network has 84% natural accuracy and only 46% robust accuracy on CIFAR-10 under the ℓ∞ threat model, as shown in Figure 1(a). Empirically, we have to increase the pre-specified perturbation budget to allocate more model capacity for defense against adversarial attacks and thereby mitigate the degree of robustness disparity, as shown in Figure 1(b). However, the feasible range of the perturbation budget differs across networks with different model capacities. For example, AT with perturbation budget ϵ = 40/255 will make PreAct ResNet-18 optimization collapse, while Wide ResNet-34-10 can learn normally. In order to maintain a steady degree of robustness disparity, we have to find a separate perturbation budget for each network with a different model capacity. Therefore, it may be pessimistic to use AT with a pre-specified perturbation budget to learn a robust network.
Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget gradually weakens as network robustness grows. During the training process, adversarial training data are generated on the fly and change based on the updating of the network. As the network’s adversarial robustness continues to increase, the attack strength of adversarial training data under the pre-specified perturbation budget becomes relatively weaker. Given the limited network capacity, a degenerate or stagnant adversary accompanied by an evolving network will easily cause training bias: adversarial training becomes more inclined toward defense against weak-strength attacks, and thereby erodes defenses against strong-strength attacks, leading to undesirable robust overfitting, as shown in Figure 1(c). Moreover, compared with the “best” checkpoint in AT with robust overfitting, the “last” checkpoint’s defense advantage on weak-strength attacks is slight, while its defense disadvantage on strong-strength attacks is significant, which indicates that robust overfitting not only exacerbates the degree of robustness disparity but also further degrades the adversarial robustness. Thus, it may be deficient to use adversarial data with a pre-specified perturbation budget to train a robust network.
To overcome these limitations, we propose strength-adaptive adversarial training (SAAT), which employs an adversarial loss constraint to generate adversarial training data. The adversarial perturbation generated under this constraint is adaptive to the dynamic training schedule and networks of various model capacities. Specifically, as adversarial training progresses, a larger perturbation budget is required to satisfy the adversarial loss constraint since the network becomes more robust. Thus, the perturbation budgets in our SAAT is adaptively adjusted according to the training state of adversarial data, which restrains the training bias and effectively avoids robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data by the adversarial loss constraint, which guides model capacity scheduling in adversarial training, and thereby can flexibly adjust the tradeoff between natural accuracy and robustness, ensuring that the output network maintains a steady degree of robustness disparity even under networks with different model capacities.
Our contributions are as follows. (a) In standard AT, we characterize the pessimism of an adversary with a pre-specified perturbation budget, which is due to the intractable robustness disparity and undesirable robust overfitting (in Section 3.1). (b) We propose a new adversarial training method, i.e., SAAT (its learning objective in Section 3.2 and its realization in Section 3.3). SAAT is a general adversarial training method that can be easily converted to natural training or standard AT. (c) Empirically, we find that the adversarial training loss is well-correlated with the degree of robustness disparity and the robust generalization gap (in Section 4.2), which enables our SAAT to overcome the issue of robust overfitting and flexibly adjust the tradeoff of adversarial training, leading to improved natural accuracy and robustness (in Section 4.3).
2 PRELIMINARY AND RELATED WORK
In this section, we review the adversarial training method and related works.
2.1 ADVERSARIAL TRAINING
Learning objective. Let $f_\theta$, $\mathcal{X}$, and $\ell$ denote the network $f$ with trainable model parameters $\theta$, the input feature space, and the loss function, respectively. Given a $C$-class dataset $S = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y} = \{0, 1, \dots, C-1\}$ is its associated label, most machine learning tasks in natural training can be formulated as solving the following optimization problem:
$$\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i). \tag{1}$$
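For concreteness, a minimal PyTorch sketch of this natural-training step (Eq. (1)) is given below; the model, data loader, and optimizer are generic placeholders rather than the paper’s released code.

```python
import torch
import torch.nn.functional as F

def natural_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of empirical risk minimization over natural data (Eq. (1))."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)  # mini-batch estimate of the empirical risk
        loss.backward()
        optimizer.step()
```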
The learning objective of natural training is to obtain networks that have the minimum empirical risk of a natural input being wrongly classified. In adversarial training, the adversary adds an adversarial perturbation to each sample, i.e., transforms $S = \{(x_i, y_i)\}_{i=1}^{n}$ into $S' = \{(x'_i = x_i + \delta_i, y_i)\}_{i=1}^{n}$. The adversarial perturbations $\{\delta_i\}_{i=1}^{n}$ are constrained by a pre-specified budget, i.e., $\{\delta \in \Delta : \|\delta\|_p \le \epsilon\}$, where $p$ can be $1$, $2$, $\infty$, etc. In order to defend against such attacks, standard adversarial training (AT) (Madry et al., 2017) resorts to solving the following objective function:
$$\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \max_{\delta_i \in \Delta} \ell(f_\theta(x_i + \delta_i), y_i). \tag{2}$$
Note that the outer minimization remains the same as Eq.(1), and the inner maximization operator can also be re-written as
$$\delta_i = \arg\max_{\delta_i \in \Delta} \ell(f_\theta(x_i + \delta_i), y_i), \tag{3}$$
where $x'_i = x_i + \delta_i$ is the most adversarial data within the perturbation budget $\Delta$. Standard AT employs the most adversarial data generated according to Eq.(3) for updating the current model. The learning objective of standard AT is to obtain networks that have the minimum adversarial risk of an input being wrongly classified under the pre-specified perturbation budget.
Realizations. The objective function of standard AT (Eq.(2)) is a composition of an inner maximization problem and an outer minimization problem, with one step generating adversarial data and one step minimizing the loss on the generated adversarial data w.r.t. the model parameters θ. For the outer minimization problem, Stochastic Gradient Descent (SGD) (Bottou, 1999) and its variants are widely used to optimize the model parameters (Rice et al., 2020). For the inner maximization problem, Projected Gradient Descent (PGD) (Madry et al., 2017) is the most common approximation method for generating the adversarial perturbation, and can be viewed as a multi-step variant of the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). Given a normal example x ∈ X and step size α > 0, PGD works as follows:
$$\delta^{k+1} = \Pi_{\Delta}\left(\alpha \cdot \mathrm{sign}\,\nabla_{x}\,\ell(f(x + \delta^{k}), y) + \delta^{k}\right), \quad k \in \mathbb{N}, \tag{4}$$
where $\delta^{k}$ is the adversarial perturbation at step $k$, and $\Pi_{\Delta}$ is the projection function that projects the adversarial perturbation back onto the pre-specified budget $\Delta$ if necessary.
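As an illustration of Eq. (4), a minimal PyTorch sketch of the ℓ∞ PGD adversary is shown below; the function name, the random start, and the clipping of pixels to [0, 1] are our assumptions, not the authors’ code. In standard AT, each training step would then minimize the loss on the returned adversarial examples.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Multi-step l_inf PGD (Eq. (4)): gradient-sign ascent followed by projection."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # optional random start inside the eps-ball
    delta.requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()                # ascent step on the loss
            delta.clamp_(-eps, eps)                     # project back into the eps-ball
            delta.copy_(((x + delta).clamp(0, 1)) - x)  # keep perturbed pixels in [0, 1]
    return (x + delta).detach()
```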
2.2 RELATED WORK
Stopping criteria. There are different stopping criteria for PGD-based adversarial training. For example, standard AT (Madry et al., 2017) employs a fixed number of iterations K, namely PGD-K, which is commonly used in many outstanding adversarial training variants, such as TRADES (Zhang et al., 2019), MART (Wang et al., 2019b), and RST (Carmon et al., 2019). Besides, some works further enhance the PGD-K method by incorporating additional optimization mechanisms, such as curriculum learning (Cai et al., 2018), FOSC (Wang et al., 2019a), and geometry reweighting (Zhang et al., 2020b). On the other hand, some works adopt a different PGD stopping criterion, i.e., a misclassification-aware criterion, which stops the iterations once the network misclassifies the adversarial data. This misclassification-aware criterion is widely used in emerging adversarial training variants, such as FAT (Zhang et al., 2020a), MMA (Ding et al., 2018), IAAT (Balaji et al., 2019), ATES (Sitawarin et al., 2020), and Customized AT (Cheng et al., 2020). Different from these works, we propose strength-adaptive PGD (SA-PGD), which uses the minimum adversarial loss as the stopping criterion to generate efficient adversarial data for adversarial training.
Relationship between accuracy and robustness. Also relevant to this work are studies of the relationship between natural accuracy and robustness. PGD-based AT can enhance robustness against adversarial data, but it degrades accuracy on natural data significantly. One popular view is that there is an inevitable tradeoff between robustness and natural accuracy. For example, Tsipras et al. (2018) claimed that robustness and natural accuracy might be at odds. Su et al. (2018) reported a linearly negative correlation between the logarithm of natural accuracy and robustness. Zhang et al. (2019) theoretically characterized the tradeoff. However, by the very definition of adversarial perturbation, a human is both robust and accurate, with no tradeoff. Some other works also provide evidence that robustness and natural accuracy are not opposed. For example, Stutz et al. (2019) confirmed the existence of adversarial data on the manifold of natural data. Yang et al. (2020) showed that benchmark datasets with adversarial perturbation are distributionally separated. Raghunathan et al. (2020) stated that additional unlabeled data help mitigate the tradeoff. Nakkiran (2019) proved that the tradeoff is due to insufficient expressive ability of the network. A separate but related line of work has also challenged the tradeoff by improving natural accuracy while maintaining robustness (Zhang et al., 2020a) or retaining natural accuracy while improving robustness (Zhang et al., 2020b). However, these works use PGD as the robustness evaluation, which is not always reliable since such networks can be defeated by stronger attacks (Liu et al., 2021). In this work, we additionally adopt AutoAttack (Croce & Hein, 2020b), a stronger and more reliable robustness evaluation method, to conduct a more comprehensive evaluation of AT’s tradeoff.
3 STRENGTH-ADAPTIVE ADVERSARIAL TRAINING
In this section, we introduce the proposed strength-adaptive adversarial training (SAAT) and its learning objective as well as algorithmic realization.
3.1 MOTIVATIONS OF SAAT
Robustness disparity is intractable for AT with a pre-specified perturbation budget. For adversarially-trained networks, it is widely recognized that robust accuracy should be lower than natural accuracy. Nevertheless, the degree of robustness disparity between natural and robust accuracy is often overlooked. Ideally, for a fully robust network, the robust accuracy and natural accuracy should be very close. The ability to maintain a steady degree of robustness disparity is therefore critical for learning a robust network. However, current AT methods typically employ a pre-specified perturbation budget to generate adversarial data, whose attack strength is relative to the network’s model capacity; this fails to yield a steady degree of robustness disparity across different networks, as shown in Figure 2(a). For each network, we further numerically compute its degree of robustness disparity (RD):
$$\mathrm{RD} = \frac{1}{n}\sum_{i=1}^{n}\frac{A_0 - A_i}{\epsilon_i},$$
where $A_i$ denotes the accuracy under perturbation budget $\epsilon_i$ (with $A_0$ the natural accuracy). As shown by the statistical results in Figure 2(b), when the perturbation budget is fixed, the degree of robustness disparity becomes more prominent as the model capacity increases, which is not a robust network’s desideratum. Thus, to maintain a steady degree of robustness disparity in adversarial training, we should open up the perturbation budget constraint and flexibly adjust the attack strength of training data according to the network’s model capacity.
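To make the RD metric concrete, the short sketch below computes it from accuracies measured at increasing perturbation budgets; the numbers in the example are made-up placeholders, not results reported in the paper.

```python
def robustness_disparity(accuracies, budgets):
    """RD = (1/n) * sum_i (A_0 - A_i) / eps_i, taken over the non-zero budgets."""
    a0 = accuracies[0]  # accuracy at budget 0, i.e., natural accuracy
    pairs = [(a, e) for a, e in zip(accuracies[1:], budgets[1:]) if e > 0]
    return sum((a0 - a) / e for a, e in pairs) / len(pairs)

# Hypothetical accuracies (%) at budgets of 0, 2, 4, and 8 (in units of 1/255):
print(robustness_disparity([84.0, 72.0, 61.0, 46.0], [0, 2, 4, 8]))  # -> 5.5 %/pixel
```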
Robust overfitting degrades the network’s adversarial robustness. AT employs the most adversarial data to reduce the sensitivity of the network’s output w.r.t. adversarial perturbations of the natural data. However, during the training process, adversarial training data are generated on the fly and become progressively weaker as the network’s adversarial robustness increases. Given a certain amount of allocatable model capacity, adversarial training data with a pre-specified perturbation budget will inevitably induce training bias, which eventually leads to robust overfitting. As shown in Figure 2(c), when the perturbation budget is fixed, robust overfitting occurs as the network’s model capacity increases. We further compare the adversarial robustness between the “best” and “last” checkpoints in AT without and with robust overfitting. As shown in Figure 2(d), robust overfitting significantly degrades the network’s adversarial robustness. Therefore, to avoid robust overfitting in adversarial training, we should flexibly adjust the attack strength of adversarial training data to adapt to the dynamic training schedule.
3.2 LEARNING OBJECTIVE OF SAAT
Let ρ be the specified minimum adversarial loss constraint on the adversarial training data. In the learning objective of SAAT, the outer minimization for optimizing the model parameters still follows Eq.(2) or Eq.(1). However, instead of generating the adversarial perturbation δ via inner maximization, we generate δ as follows:
$$\delta_i = \arg\min_{\delta_i} \ \ell(f_\theta(x_i + \delta_i), y_i) \quad \text{s.t.} \quad \ell(f_\theta(x_i + \delta_i), y_i) \ge \rho. \tag{5}$$
Note that the operator argmax in Eq.(3) is replaced with argmin here, and there is no explicit perturbation budget constraint on δ. Instead, we adopt the magnitude of the adversarial loss to constrain the generation of the adversarial perturbation. The constraint ensures that the loss of the adversarial training data is at least the specified minimum adversarial loss ρ. Among all such δ satisfying the constraint, we select the one minimizing ℓ(fθ(xi + δi), yi). In terms of the process of generating adversarial perturbations, Eq.(5) can be regarded as an adaptive adversary, since the adversarial loss is related to the training schedule and the network’s model capacity. For example, in the early stages of training, little or no adversarial attack may be needed to generate qualified adversarial training data. However, in the later stages of training, more effort is required to generate the corresponding adversarial training data because the network is more robust. On the other hand, in terms of the attack strength of training data, Eq.(5) is actually an attack strength-fixed adversary, since the adversarial loss of training data is constrained by a fixed ρ.
The learning objective of SAAT is to obtain the networks with a steady degree of robustness disparity, which is achieved by optimizing model using adversarial training data with a fixed attack strength (in terms of the adversarial loss ρ). ρ is used to guide model capacity scheduling during adversarial training, so as to ensure that the output network maintains a steady degree of robustness disparity.
Relation with natural training and standard AT. Notice that the learning objective of SAAT is extremely general. When ρ ≤ 0, SAAT is equivalent to natural training (refer to Eq.(1)): no training data needs adversarial perturbations, so most of the model capacity will be used to learn the natural data. When ρ → ∞, SAAT is equivalent to standard AT (refer to Eq.(2)): all training data is the most adversarial data, so a large amount of model capacity will be used for enhancing adversarial robustness (depending on the maximal perturbation budget). When 0 < ρ < ∞, SAAT lies between natural training and standard AT, and can manipulate the model capacity scheduling during the training phase, so as to obtain output networks with multiple alternative forms of adversarial robustness for various practical needs. Eq.(5) recovers both natural training and standard AT, and is thus a more general learning objective for adversarial training.
3.3 REALIZATION OF SAAT
The learning objective of SAAT implies the optimization of an adversarially robust network with a steady degree of robustness disparity, with one step generating qualified adversarial training data and one step minimizing the adversarial loss w.r.t. the model parameters. Specifically, we search for qualified adversarial training data by adjusting the perturbation budget and the attack steps. For instance, given a perturbation budget, we perform sufficient PGD attacks within this budget. If the most adversarial data still does not satisfy the constraint of Eq.(5), we increase the perturbation budget and conduct further PGD attacks until we find adversarial data that satisfies the minimum adversarial loss criterion.
How to estimate the optimal perturbation budget is an open question. Here we heuristically design a simple implementation: progressive search. Specifically, the perturbation budgets of training data are initially set to 0, and then their perturbation budgets are increased stepwise (e.g., increments of step size τ ). Each time the perturbation budgets are updated, it is followed by K iterations of PGD attacks until the generated adversarial data satisfies the minimum adversarial loss criterion. The limitation of this implementation is that setting the initial perturbation budget to 0 increases the
Algorithm 1 Strength-Adaptive PGD (SA-PGD)
1: Input: data x ∈ X, label y ∈ Y, model f, loss function ℓ, maximum perturbation budget ϵmax, minimum adversarial loss ρ, perturbation budget ϵ, perturbation budget step size τ, SA-PGD step K, SA-PGD step size α
2: Output: x′
3: x′ ← x; ϵ ← 0
4: if ℓ(f(x′), y) ≥ ρ then
5:   break
6: else
7:   while ϵ < ϵmax do
8:     ϵ ← ϵ + τ
9:     for k = 1, ..., K do
10:      x′ ← Πϵ(α · sign(∇x′ℓ(f(x′), y)) + x′)
11:      if ℓ(f(x′), y) ≥ ρ then
12:        return x′
13:      end if
14:    end for
15:  end while
16: end if
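A possible PyTorch realization of Algorithm 1 is sketched below. This is our own illustrative code rather than the authors’ implementation: names such as `sa_pgd` are assumptions, and for readability the stopping check uses the batch-mean loss, whereas the algorithm applies the criterion per example.

```python
import torch
import torch.nn.functional as F

def sa_pgd(model, x, y, rho, eps_max=128/255, tau=2/255, K=3, alpha=2/255):
    """Strength-Adaptive PGD (Algorithm 1): grow the budget until the loss reaches rho."""
    x_adv = x.clone().detach()
    if F.cross_entropy(model(x_adv), y) >= rho:       # natural data already satisfies the constraint
        return x_adv
    eps = 0.0
    while eps < eps_max:
        eps += tau                                    # enlarge the perturbation budget stepwise
        for _ in range(K):                            # K PGD steps under the current budget
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
                x_adv = x_adv.clamp(0, 1)
            x_adv = x_adv.detach()
            if F.cross_entropy(model(x_adv), y) >= rho:
                return x_adv                          # minimum adversarial loss reached
    return x_adv                                      # outlier: constraint unmet even at eps_max
```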
Algorithm 2 Strength-Adaptive Adversarial Training (SAAT)
1: Input: network fθ, training dataset S = {(x_i, y_i)}_{i=1}^{n}, learning rate η, number of epochs T, batch size m, number of batches M
2: Output: adversarially robust network fθ with a target degree of robustness disparity
3: for epoch = 1, ..., T do
4:   for mini-batch = 1, ..., M do
5:     Sample a mini-batch {(x_i, y_i)}_{i=1}^{m} from S
6:     for i = 1, ..., m (in parallel) do
7:       Obtain adversarial data x′i of xi by Algorithm 1
8:     end for
9:     θ ← θ − η · (1/m) ∑_{i=1}^{m} ∇θ ℓ(fθ(x′i), yi)
10:  end for
11: end for
computational cost and slows the speed, even though it ensures that the optimal perturbation budget will be estimated.
A maximally allowed perturbation budget (ϵmax) is introduced: we observe that even with the largest perturbation (e.g., ϵ = 255/255), there are still some examples (outliers) that fail to satisfy the minimum adversarial loss constraint. Considering that the pixel values are strictly sampled in [0, 255/255], it is necessary to introduce a maximum perturbation budget to avoid infinite loops.
Algorithm 1 is our strength-adaptive PGD method (SA-PGD), which returns the generated adversarial training data. Algorithm 2 is the proposed strength-adaptive adversarial training (SAAT). SAAT leverages Algorithm 1 for obtaining the qualified adversarial data to optimize the model parameters.
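The outer loop of Algorithm 2 then reduces to an ordinary training loop around the SA-PGD routine; a minimal sketch (reusing the assumed `sa_pgd` function above, not the released code) could look as follows.

```python
import torch
import torch.nn.functional as F

def saat_train(model, loader, optimizer, rho=1.5, eps_max=128/255, epochs=200, device="cuda"):
    """Strength-Adaptive Adversarial Training (Algorithm 2): one SGD step per mini-batch."""
    for _ in range(epochs):
        model.train()
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = sa_pgd(model, x, y, rho=rho, eps_max=eps_max)  # Algorithm 1
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)  # minimize loss on the adaptive adversarial data
            loss.backward()
            optimizer.step()
```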
4 EXPERIMENTS
In this section, we conduct comprehensive experiments to evaluate the effectiveness of SAAT including its experimental setup (in Section 4.1), algorithm analysis (in Section 4.2), robustness evaluation (in Section 4.3), and the performance under different model capacities (in Section 4.4).
4.1 EXPERIMENTAL SETUP
Our code is implemented on the open-source PyTorch framework with a single NVIDIA A100-SXM4-40GB GPU. The code as well as related models will be released for public use and verification. We conduct experiments under the ℓ∞ threat model and follow the hyper-parameter setting of Rice et al. (2020) for a fair comparison with state-of-the-art AT methods. For training, the network is trained for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and an initial learning rate of 0.1. The learning rate is divided by 10 at the 100-th and 150-th epoch, respectively.
Conventional data augmentation, including random crops with 4 pixels of padding and random horizontal flips, is applied. For the adversary, we use SA-PGD (Algorithm 1) to generate adversarial training data. The step size α = 2/255 is used following standard PGD (Madry et al., 2017). We set the perturbation budget step size τ to be consistent with the SA-PGD step size α, e.g., τ = 2/255, and use SA-PGD step K = 3, so that the adversarial data are sufficiently attacked under each updated perturbation budget. For robustness evaluation, the output model is tested under a series of adversaries, including natural data, PGD (Madry et al., 2017), and AutoAttack (AA) (Croce & Hein, 2020b). Among them, natural accuracy and PGD accuracy intuitively reflect the network’s robustness disparity. AA is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)). AA regards a network as robust only if the model correctly classifies the adversarial data under all types of attacks, which makes it among the most reliable evaluations of adversarial robustness to date.
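For reference, the optimizer, learning-rate schedule, and data augmentation described above can be set up in PyTorch roughly as follows; the backbone is a stand-in (the paper uses PreAct ResNet-18 and Wide ResNet variants), so treat this as a sketch of the stated hyper-parameters rather than the authors’ script.

```python
import torch
import torchvision
import torchvision.transforms as T

# Stand-in backbone for CIFAR10; the paper uses PreAct ResNet-18 / WRN-34-x.
model = torchvision.models.resnet18(num_classes=10)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # random crops with 4 pixels of padding
    T.RandomHorizontalFlip(),      # random horizontal flips
    T.ToTensor(),
])
# After each training epoch: scheduler.step()  (learning rate divided by 10 at epochs 100 and 150)
```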
4.2 ANALYSIS OF THE PROPOSED ALGORITHM
We delve into SAAT to investigate its each component, including the role of minimum adversarial loss ρ and the impact of maximum perturbation budget ϵmax. All analysis experiments are conducted using PreAct ResNet-18 (He et al., 2016) on CIFAR10 dataset (Krizhevsky et al., 2009).
The role of minimum adversarial loss ρ. We empirically investigate the role of the minimum adversarial loss by using different ρ to generate adversarial training data, where ϵmax is assigned a sufficiently large value, such as ϵmax = 128/255. The minimum adversarial loss ρ for SAAT varies from 0 to 2.2, and the evaluation results are summarized in Figure 3 (a). It is observed that the model’s degree of robustness disparity is well-correlated with the adversarial training loss ρ. When ρ = 0, there is a large performance gap between the natural accuracy and the robust accuracy. This performance gap keeps decreasing as ρ increases, and when ρ = 2.2, the robust accuracy is almost the same as the natural accuracy. Note that SAAT fails to converge when ρ > 2.2, which might be explained by the fact that robust accuracy should be lower than natural accuracy for any adversarially-trained network. The clear correlation between the minimum training loss and the degree of robustness disparity enables our SAAT to flexibly control the performance gap in output networks. Moreover, we observe that ρ is also closely related to the robust generalization gap. The learning curves of SAAT with different ρ are shown in Figure 3 (c), and their robust generalization gaps are summarized in Figure 3 (d). It can be seen that the robust accuracy is always in sync with the natural accuracy during the training phase, and there is no significant robustness degradation as in Figure 1(c). When ρ = 1.5, the robust generalization gap is already very small.
The impact of maximum perturbation budget ϵmax. We further investigate the impact of the introduced maximum perturbation budget by comparing the robustness performance of models trained using different ϵmax. Given a fixed ρ, such as ρ = 1.5, the value of ϵmax varies from 0 to 128/255, and the evaluation results are summarized in Figure 3 (b). It can be observed that when ϵmax is small, increasing ϵmax leads to a flatter degree of robustness disparity, which indicates that more model capacity is allocated to defend against adversarial attacks. As expected, when ϵmax is greater than a certain value, the degree of robustness disparity reaches a plateau, which suggests that most of the adversarial training data already meet the minimum adversarial loss constraint, so the network tends to maintain the steady robustness disparity bound by ρ. ϵmax adjusts the robustness disparity within the plateau constrained by ρ, which reflects a tradeoff between natural accuracy and adversarial robustness and suggests that ϵmax helps adjust this tradeoff. Notably, the observed tradeoff is not inherent in adversarial training but a consequence of model capacity scheduling. Note that the appropriate size of ϵmax is also influenced by the minimum adversarial loss ρ. In the following section, we fine-tune both ϵmax and ρ for the robustness evaluation of SAAT.
4.3 ROBUSTNESS EVALUATION
Compared with standard AT, SAAT mainly differs in weakening the easy-to-attack samples (those whose adversarial loss is higher than the minimal adversarial loss) and enhancing the hard-to-attack samples (those whose adversarial loss is lower than the minimal adversarial loss). In this part, we investigate their respective effects on the network’s adversarial robustness on two classic baselines: AT and AWP, where AWP can suppress robust overfitting and achieve state-of-the-art adversarial robustness.
[Figure 3 appears here in the original: five panels of accuracy curves — (a) Role of minimum adversarial loss; (b) Impact of maximum perturbation budget; (c) Learning curves of SAAT with ρ of 0.5, 1.0, 1.5 and 2.0 (from left to right); (d) Robust generalization gap; (e) SAAT under different model capacities. The axes report accuracy (%) or robust generalization gap (%) versus perturbation budget (*/255) or training epoch, and robustness disparity (%/pixel) per model architecture.]
Figure 3: Robustness evaluation of SAAT with (a) varied ρ and (b) varied ϵmax; the (c) learning curves and (d) robust generalization gap with different ρ; (e) robustness disparity on networks with different model capacities.
Specifically, we process the easy-to-attack samples and the hard-to-attack samples separately, denoted as SAATdown and SAATup, respectively. For SAATdown, the maximum adversarial budget remains the same as in standard AT, e.g., ϵmax = 8, and the minimal adversarial loss is set to 1.5. The evaluation results are shown in Table 1. As expected, weakening the easy-to-attack samples makes adversarial training inclined toward defense against weak attacks, which significantly increases the natural accuracy and maintains the PGD accuracy, but it not only aggravates the robustness disparity but also degrades the model’s adversarial robustness (in terms of AA). For SAATup, by increasing the maximum adversarial budget and the minimal adversarial loss, more model capacity is used to defend against adversarial attacks, which significantly alleviates the robustness disparity of the output network and further enhances the model’s adversarial robustness. The performance evaluation of SAATup across different datasets, model structures, and AT methods is provided in Appendix A, where SAATup boosts adversarial robustness under all settings, demonstrating the effectiveness of the proposed method. Note that the robust accuracy of SAATup at ϵ = 8 first increases and then decreases. This is reasonable because, as ϵ and ρ increase, the robustness disparity of the output model becomes flatter, and the adversarial robustness also migrates toward relatively larger perturbation budgets. In addition, it can be observed that the performance gap between the last checkpoint and the best checkpoint of SAATup shrinks, which illustrates that enhancing the hard-to-attack samples can also effectively alleviate robust overfitting.
4.4 PERFORMANCE UNDER DIFFERENT MODEL CAPACITIES
We extend SAAT to networks with different model capacities. Specifically, we perform SAAT on a series of Wide ResNet-34-x networks, where x is 1, 3, 5, and 7 respectively. Note that the larger the x, the larger the model capacity. ρ is fixed at 1.5 and ϵmax is sufficiently large, such
as ϵmax = 128/255. The evaluation results are summarized in Figure 3 (e). It can be observed that the output network can maintain a steady degree of robustness disparity across different model capacities. Such observations exactly reflect the nature of our approach that SAAT is able to adapt to networks of various model capacities.
5 CONCLUSION
We present a strength-adaptive adversarial training (SAAT) method in this paper. The proposed approach distinguishes itself from others by using a minimum adversarial loss constraint for generating adversarial training data, which is adaptive to the dynamic training schedule and to networks of various model capacities. We show that the adversarial training loss is well-correlated with the degree of robustness disparity and the robust generalization gap, and we empirically verify that SAAT can effectively alleviate robust overfitting, mitigate the robustness disparity of output networks, and further enhance adversarial robustness by adjusting the tradeoff of adversarial training, demonstrating the effectiveness of the proposed approach. We hope that the robustness disparity measure we offer can further improve the completeness of adversarial robustness evaluation methods (Carlini et al., 2019), and we expect more new techniques to be proposed toward learning a fully robust network.
A APPENDIX
In this part, we conduct extended comparative experiments. Specifically, we conduct experiments under different datasets, different model structures and different AT methods. The results are summarized in Table 2 and Table 3. Experimental results show that the proposed method can achieve higher adversarial robustness under all settings, demonstrating the effectiveness of the proposed approach. | 1. What is the main contribution of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed method compared to standard adversarial training?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some potential concerns or limitations of the proposed approach that the reviewer mentions?
5. Are there any recent works related to adaptive adversarial training that the reviewer thinks the author should have considered in their experiments? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper discusses the issue of adversarial training over-fitting, pointing out that a pre-specified perturbation budget is not optimal: as training progresses, the perturbation budget should be adjusted accordingly. Based on this intuition, the author proposes a new adversarial training method that generates adversarial examples by maintaining a minimum adversarial loss instead of searching for the point that maximizes the loss. Experiments on CIFAR10 show that the proposed method outperforms standard adversarial training and, when combined with AWP, it also outperforms standard adversarial training with AWP.
Strengths And Weaknesses
Strength:
The paper is clearly written with intuition and finding discussed in detail.
Experiments show that the proposed method outperforms standard adversarial training on CIFAR10
Weakness:
The proposed method lacks theoretical and empirical support. Even though some experiments are done on CIFAR10, one dataset is not enough to show that the method works well in general. Besides, there have been many new adversarial training methods proposed since 2018 (the year standard adversarial training was proposed). The author also discusses some in the paper, but why not compare with those methods in the experiments? There have also been works about adaptive adversarial training. Lack of experiments is the key issue of the paper.
Clarity, Quality, Novelty And Reproducibility
Clarity:
The paper is clearly written and easy to follow.
Quality:
The paper is well written but to support the claim more experiments need to be done.
Novelty:
The idea is novel and interesting. As far as I know, no one has proposed similar adaptive ideas, but there are some works about adaptive epsilon in adversarial training.
Reproducibility:
Though some implementation details are mentioned in the paper, code is not provided. |
ICLR | Title
Strength-Adaptive Adversarial Training
Abstract
Adversarial training (AT) is proved to reliably improve network’s robustness against adversarial data. However, current AT with a pre-specified perturbation budget has limitations in learning a robust network. Firstly, applying a prespecified perturbation budget on networks of various model capacities will yield divergent degree of robustness disparity between natural and robust accuracies, which deviates from robust network’s desideratum. Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget fails to upgrade as the growth of network robustness, which leads to robust overfitting and further degrades the adversarial robustness. To overcome these limitations, we propose Strength-Adaptive Adversarial Training (SAAT). Specifically, the adversary employs an adversarial loss constraint to generate adversarial training data. Under this constraint, the perturbation budget will be adaptively adjusted according to the training state of adversarial data, which can effectively avoid robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data through the adversarial loss, which manipulates model capacity scheduling during training, and thereby can flexibly control the degree of robustness disparity and adjust the tradeoff between natural accuracy and robustness. Extensive experiments show that our proposal boosts the robustness of adversarial training.
1 INTRODUCTION
Current deep neural networks (DNNs) achieve impressive breakthroughs on a variety of fields such as computer vision (He et al., 2016), speech recognition (Wang et al., 2017), and NLP (Devlin et al., 2018), but it is well-known that DNNs are vulnerable to adversarial data: small perturbations of the input which are imperceptible to humans will cause wrong outputs (Szegedy et al., 2013; Goodfellow et al., 2014). As countermeasures against adversarial data, adversarial training (AT) is a method for hardening networks against adversarial attacks (Madry et al., 2017). AT trains the network using adversarial data that are constrained by a pre-specified perturbation budget, which aims to obtain the output network with the minimum adversarial risk of an sample to be wrongly classified under the same perturbation budget. Across existing defense techniques, AT has been proved to be one of the most effective and reliable methods against adversarial attacks (Athalye et al., 2018).
Although promising to improve the network’s robustness, AT with a pre-specified perturbation budget still has limitations in learning a robust network. Firstly, the pre-specified perturbation budget is inadaptable for networks of various model capacities, yielding divergent degree of robustness disparity between natural and robust accuracies, which deviates from robust network’s desideratum. Ideally, for a robust network, perturbing the attack budget within a small range should not cause signifcant accuracy degradation. Unfortunately, the degree of robustness disparity is intractable for AT with a pre-specified perturbation budget. In standard AT, there could be a prominent degree of robustness disparity in output networks. For instance, a standard PGD adversarially-trained PreAct ResNet18 network has 84% natural accuracy and only 46% robust accuracy on CIFAR10 under ℓ∞ threat model, as shown in Figure 1(a). Empirically, we have to increase the pre-specified perturbation budget to allocate more model capacity for defense against adversarial attacks to mitigate the degree of robustness disparity, as shown in Figure 1(b). However, the feasible range of perturbation budget is different for networks with different model capacities. For example, AT with perturbation budget ϵ = 40/255 will make PreAct ResNet-18 optimization collapse, while wide ResNet-34-10 can learn normally. In order to maintain a steady degree of robustness disparity, we have to find separate perturbation budgets for each network with different model capacities. Therefore, it may be pessimistic to use AT with a pre-specified perturbation budget to learn a robust network.
Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget is gradually weakened as the growth of network robustness. During the training process, adversarial training data are generated on the fly and are changed based on the updating of the network. As the the network’s adversarial robustness continues to increase, the attack strength of adversarial training data with the pre-specified perturbation budget is getting relatively weaker. Given the limited network capacity, a degenerate or stagnant adversary accompanied by an evolving network will easily cause training bias: adversarial training is more inclined to the defense against weak strength attacks, and thereby erodes defenses on strong strength attacks, leading to the undesirable robust overfitting, as shown in Figure 1(c). Moreover, compared with the “best” checkpoint in AT with robust overfitting, the “last” checkpoint’s defense advantage in weak strength attack is slight, while its defense disadvantage in strong strength attack is significant, which indicates that robust overfitting not only exacerbates the degree of robustness disparity, but also further degrades the adversarial robustness. Thus, it may be deficient to use adversarial data with a pre-specified perturbation budget to train a robust network.
To overcome these limitations, we propose strength-adaptive adversarial training (SAAT), which employs an adversarial loss constraint to generate adversarial training data. The adversarial perturbation generated under this constraint is adaptive to the dynamic training schedule and networks of various model capacities. Specifically, as adversarial training progresses, a larger perturbation budget is required to satisfy the adversarial loss constraint since the network becomes more robust. Thus, the perturbation budgets in our SAAT is adaptively adjusted according to the training state of adversarial data, which restrains the training bias and effectively avoids robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data by the adversarial loss constraint, which guides model capacity scheduling in adversarial training, and thereby can flexibly adjust the tradeoff between natural accuracy and robustness, ensuring that the output network maintains a steady degree of robustness disparity even under networks with different model capacities.
Our contributions are as follows. (a) In standard AT, we characterize the pessimism of adversary with a pre-specified perturbation budget, which is due to the intractable robustness disparity and undesirable robust overfitting (in Section 3.1). (b) We propose a new adversarial training method, i.e., SAAT (its learning objective in Section 3.2 and its realization in Section 3.3). SAAT is a general adversarial training method that can be easily converted to natural training or standard AT. (c) Empirically, we find that adversarial training loss is well-correlated with the degree of robustness disparity and robust generalization gap (in Section 4.2), which enables our SAAT to overcome the issue of robust overfitting and flexibly adjust the tradeoff of adversarial training, leading to the improved natural accuracy and robustness (in Section 4.3).
2 PRELIMINARY AND RELATED WORK
In this section, we review the adversarial training method and related works.
2.1 ADVERSARIAL TRAINING
Learning objective. Let fθ, X and ℓ be the network f with trainable model parameter θ, input feature space, and loss function, respectively. Given a C-class dataset S = {(xi, yi)}ni=1, where xi ∈ X and yi ∈ Y = {0, 1, ..., C − 1} as its associated label. In natural training, most machine
learning tasks could be formulated as solving the following optimization problem:
min θ
1
n n∑ i=1 ℓ(fθ(xi), yi). (1)
The learning objective of natural training is to obtain the networks that have the minimum empirical risk of a natural input to be wrongly classified. In adversarial training, the adversary adds the adversarial perturbation to each sample, i.e., transform S = {(xi, yi)}ni=1 to S ′ = {(x′i = xi + δi, yi)}ni=1. The adversarial perturbation {δi}ni=1 are constrained by a pre-specified budget, i.e. {δ ∈ ∆ : ||δ||p ≤ ϵ}, where p can be 1, 2,∞, etc. In order to defend such attack, standard adversarial training (AT) (Madry et al., 2017) resort to solve the following objective function:
min θ
1
n n∑ i=1 max δi∈∆ ℓ(fθ(xi + δi), yi). (2)
Note that the outer minimization remains the same as Eq.(1), and the inner maximization operator can also be re-written as
δi = argmax δi∈∆ ℓ(fθ(xi + δi), yi), (3)
where x′i = xi + δi is the most adversarial data within the perturbation budget ∆. Standard AT employs the most adversarial data generated according to Eq.(3) for updating the current model. The learning objective of standard AT is to obtain the networks that have the minimum adversarial risk of a input to be wrongly classified under the pre-specified perturbation budget.
Realizations. The objective functions of standard AT (Eq.(2)) is a composition of an inner maximization problem and an outer minimization problem, with one step generating adversarial data and one step minimizing loss on the generated adversarial data w.r.t. the model parameters θ. For the outer minimization problem, Stochastic Gradient Descent (SGD) (Bottou, 1999) and its variants are widely used to optimize the model parameters (Rice et al., 2020). For the inner maximization problem, the Projected Gradient Descent (PGD) (Madry et al., 2017) is the most common approximation method for generating adversarial perturbation, which can be viewed as a multi-step variant of Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). Given normal example x ∈ X and step size α > 0, PGD works as follows:
δk+1 = Π∆(α · sign∇xℓ(f(x+ δk), y) + δk), k ∈ N, (4) where δk is adversarial perturbation at step k; and Π∆ is the projection function that project the adversarial perturbation back into the pre-specified budget ∆ if necessary.
2.2 RELATED WORK
Stopping criteria. There are different stopping criteria for PGD-based adversarial training. For example, standard AT (Madry et al., 2017) employs a fixed number of iterations K, namely PGD-K, which is commonly used in many outstanding adversarial training variants, such as TRADES (Zhang et al., 2019), MART (Wang et al., 2019b), and RST (Carmon et al., 2019). Besides, some works have further enhanced the PGD-K method by incorporating additional optimization mechanisms, such as curriculum learning (Cai et al., 2018), FOSC (Wang et al., 2019a), and geometry reweighting (Zhang et al., 2020b). On the other hand, some works adopt different PGD stopping criterion, i.e., misclassification-aware criterion, which stops the iterations once the network misclassifies the adversarial data. This misclassification-aware criterion is widely used in the emerging adversarial training variants, such as FAT (Zhang et al., 2020a), MMA (Ding et al., 2018), IAAT (Balaji et al., 2019), ATES (Sitawarin et al., 2020), and Customized AT (Cheng et al., 2020). Different from these works, we propose strength-adaptive PGD (SA-PGD) that uses the minimum adversarial loss as the stopping criterion to generate efficient adversarial data for adversarial training.
Relationship between accuracy and robustness. Also relevant to this work are works that study the relationship between natural accuracy and robustness. PGD-based AT can enhance the robustness against adversarial data, but degrades the accuracy on the natural data significantly. One popular point is the inevitable tradeoff between robustness and natural accuracy. For example, Tsipras et al. (2018) claimed robustness and natural accuracy might at odds. Su et al. (2018) concluded a linearly negative correlation between the logarithm of natural accuracy and robustness. Zhang et al.
(2019) theoretically characterized the tradeoff. However, human is a network that is both robust and accurate with no tradeoff according to the definition of adversarial perturbation. Some other works also provide evidence that robustness and natural accuracy are not opposing. For example, Stutz et al. (2019) confirmed the existence of adversarial data on the manifold of natural data. Yang et al. (2020) showed benchmark datasets with adversarial perturbation are distributionally separated. Raghunathan et al. (2020) stated that additional unlabeled data help mitigate the tradeoff. Nakkiran (2019) proved that the tradeoff is due to the insufficient network expression ability. A separate but related line of works also has challenged the tradeoff by improving the natural accuracy while maintaining the robustness (Zhang et al., 2020a) or retaining the natural accuracy while improving the robustness (Zhang et al., 2020b). However, these works use PGD as the robustness evaluation, which is not always successful since these networks can be defeated by stronger attacks (Liu et al., 2021). In this work, we combine AutoAttack (Croce & Hein, 2020b), a stronger and more reliable robustness evaluation method, to conduct a more comprehensive evaluation of AT’s tradeoff.
3 STRENGTH-ADAPTIVE ADVERSARIAL TRAINING
In this section, we introduce the proposed strength-adaptive adversarial training (SAAT) and its learning objective as well as algorithmic realization.
3.1 MOTIVATIONS OF SAAT
Robustness disparity is intractable for AT with a pre-specified perturbation budget. For adversarially-trained networks, it is widely recognized that robust accuracy should be lower than the natural accuracy. Nevertheless, the degree of robustness disparity between natural and robust accuracy is often overlooked. Ideally, for a fully robust network, the robust accuracy and natural accuracy should be very close. The technique of maintaining a steady degree of robustness disparity is therefore critical for learning a robust network. However, current AT methods typically employ a pre-specified perturbation budget to generate adversarial data, whose attack strength is relative to the network’s model capacity, which fails to yield a steady degree of robustness disparity across different networks, as shown in Figure 2(a). For each network, we further numerically compute their degree of robustness disparity (RD): RD = 1n ∑n i=1 A0−Ai ϵi
, where Ai represents the accuracy on the perturbation budget ϵi. As shown by the statistical results in Figure 2(b), when the perturbation budget is fixed, the degree of robustness disparity becomes more prominent as the model capacity increases, which is not robust network’s desideratum. Thus, to maintain a steady degree of robustness disparity in adversarial training, we should open up perturbation budget constraints and flexibly adjust the attack strength of training data according to the network’s model capacity.
Robust overfitting degrades the network’s adversarial robustness. AT employs the most adversarial data to reduce the sensitivity of the network’s output w.r.t. adversarial perturbation of the natural data. However, during the training process, adversarial training data is generated on the fly and is getting weaker for networks with increasing adversarial robustness. Given a certain amount of allocatable model capacity, the adversarial training data with a pre-specified perturbation budget will inevitably induce training bias, which eventually leads to the robust overfitting. As shown in Figure 2(c), it is observed that when the perturbation budget is fixed, robust overfitting occurs as the network’s model capacity increases. We further compare the adversarial robustness between the “best” and “last” checkpoints in AT without robust overfitting and with robust overfitting. As shown in Figure 2(d), it can be seen that robust overfitting significantly degrades the network’s adversarial robustness. Therefore, to avoid robust overfitting in adversarial training, we should flexibly adjust the attack strength of adversarial training data to adapt to the dynamic training schedule.
3.2 LEARNING OBJECTIVE OF SAAT
Let ρ be the specified minimum adversarial loss constraint to adversarial training data. In the learning objective of SAAT, the outer minimization for optimizing model parameters still follows Eq.(2) or Eq.(1). However, instead of generating adversarial perturbation δ via inner maximization, we generate δ as follows:
δi = argmin δi
ℓ(fθ(xi + δi), yi) s.t. ℓ(fθ(xi + δi), yi) ≥ ρ. (5)
Note that the operator argmax in Eq.(3) is replaced with argmin here, and there is no explicit perturbation budget constraint for δ. Instead, we adopt the magnitude of adversarial loss to constrain the generation of adversarial perturbation. The constraint ensures that the loss of adversarial training data is greater than the specified minimum adversarial loss ρ. Among all such δ satisfying the constraint, we select the one minimizing ℓ(fθ(xi + δi), yi). In terms of the process of generating adversarial perturbations, Eq.(5) could be regarded as an adaptive adversary, since adversarial loss is related to the training schedule and network’s model capacity. For example, in the early stages of training, adversarial attack may be needless to generate qualified adversarial training data. However, in the later stages of training, more effort is required to generate the corresponding adversarial training data because the network is more robust. On the other hand, in terms of attack strength of training data, Eq.(5) is actually an attack strength-fixed adversary, since the adversarial loss of training data is constrained by a fixed ρ.
The learning objective of SAAT is to obtain the networks with a steady degree of robustness disparity, which is achieved by optimizing model using adversarial training data with a fixed attack strength (in terms of the adversarial loss ρ). ρ is used to guide model capacity scheduling during adversarial training, so as to ensure that the output network maintains a steady degree of robustness disparity.
Relation with natural training and standard AT. Notice that the learning objective of SAAT is extremely general. When ρ ≤ 0, SAAT is equivalent to natural training (refer to Eq.(1), then all training data does not need adversarial perturbations, so that most of the model capacity will be used to learn natural data. When ρ → ∞, SAAT is equivalent to standard AT (refer to Eq.(2), then all training data is the most adversarial data, so that a large amount of model capacity will be used for enhancing adversarial robustness (depending on the maximal perturbation budget). When 0 < ρ < ∞, SAAT lies in the middle of natural training and standard AT, which can manipulate the model capacity scheduling during the training phase, so as to obtain the output network with multiple alternative forms of adversarial robustness for various practical needs. Eq.(5) recovers both natural training and standard AT, thus it is a more general learning objective of adversarial training.
3.3 REALIZATION OF SAAT
The learning objective of SAAT implies the optimization of an adversarially robust networks with a steady degree of robustness disparity, with one step generating qualified adversarial training data and one step minimizing the adversarial loss w.r.t. the model parameters. Specifically, we search for qualified adversarial training data by adjusting the perturbation budget and attack step. For instance, given a perturbation budget, we perform sufficient PGD attacks within this budget. If the most adversarial data still does not satisfy the constraint of Eq.(5), we will increase the perturbation budget and conduct further PGD attacks until we find adversarial data that satisfies the minimum adversarial loss criterion.
How to estimate the optimal perturbation budget is an open question. Here we heuristically design a simple implementation: progressive search. Specifically, the perturbation budgets of training data are initially set to 0, and then their perturbation budgets are increased stepwise (e.g., increments of step size τ ). Each time the perturbation budgets are updated, it is followed by K iterations of PGD attacks until the generated adversarial data satisfies the minimum adversarial loss criterion. The limitation of this implementation is that setting the initial perturbation budget to 0 increases the
Algorithm 1 Strength-Adaptive PGD (SA-PGD) 1: Input: data x ∈ X , label y ∈ Y , model f , loss function ℓ, maximum perturbation budget ϵmax, minimum
adversarial loss ρ, perturbation budget ϵ, perturbation budget step size τ , SA-PGD step K, SA-PGD step size α
2: Output: x′ 3: x′ ← x; ϵ← 0 4: if ℓ(f(x′), y) ≥ ρ then 5: break 6: else 7: while ϵ < ϵmax do 8: ϵ← ϵ+ τ 9: for k = 1, ...,K do
10: x′ ← Πϵ(α · sign(∇x′ℓ(f(x′), y)) + x′) 11: if ℓ(f(x′), y) ≥ ρ then 12: return x′ 13: end if 14: end for 15: end while 16: end if
Algorithm 2 Strength-Adaptive Adversarial Training (SAAT) 1: Input: network fθ , training dataset S = {(xi, yi)}ni=1, learning rate η, number of epochs T , batch size
m, number of batches M 2: Output: adversarially robust network fθ with a target degree of robustness disparity 3: for epoch = 1, ..., T do 4: for mini-batch = 1, ...,M do 5: Sample a mini-batch {(xi, yi)}mi=1 from S 6: for i = 1, ...,m (in parallel) do 7: Obtain adversarial data x′i of xi by Algorithm 1 8: end for 9: θ ← θ − η 1
m ∑m i=1∇θℓ(fθ(x ′ i), yi)
10: end for 11: end for
computational cost and slows the speed, even though it ensures that the optimal perturbation budget will be estimated.
A maximally allowed perturbation budget (ϵmax) is introduced: we observe that even with the largest perturbation (e.g., ϵ = 255/255), there are still some examples (outliers) that fail to satisfy the minimum adversarial loss constraint. Considering that the pixel values are strictly sampled in [0, 255/255], it is necessary to introduce a maximum perturbation budget to avoid infinite loops.
Algorithm 1 is our strength-adaptive PGD method (SA-PGD), which returns the generated adversarial training data. Algorithm 2 is the proposed strength-adaptive adversarial training (SAAT). SAAT leverages Algorithm 1 for obtaining the qualified adversarial data to optimize the model parameters.
4 EXPERIMENTS
In this section, we conduct comprehensive experiments to evaluate the effectiveness of SAAT including its experimental setup (in Section 4.1), algorithm analysis (in Section 4.2), robustness evaluation (in Section 4.3), and the performance under different model capacities (in Section 4.4).
4.1 EXPERIMENTAL SETUP
Our code is implemented on the open source PyTorch framework with a single NVIDIA A100SXM4-40GB GPU. The code as well as related models will be released for public use and verification. We conduct experiments on ℓ∞ threat model and follow the hyper-parameter setting of Rice et al. (2020) for a fair comparison with the state-of-the-art AT methods. For training, the network is trained for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and an initial learning rate of 0.1. The learning rate is divided by 10 at the 100-th and 150-th epoch, re-
spectively. Conventional data augmentation including random crops with 4 pixels of padding and random horizontal flips are applied. For adversary, we use SA-PGD (Algorithm 1) to generate adversarial training data. The step size α = 2/255 is used following standard PGD (Madry et al., 2017). We adopt the perturbation budget step size τ consistent with the SA-PGD step size α, such as τ = 2/255, and SA-PGD step K = 3, which is to make the adversarial data sufficiently attacked under the updated perturbation budget. For robustness evaluation, the output model is tested under a series of adversaries, including natural data, PGD (Madry et al., 2017), and Auto Attack (Croce & Hein, 2020b). Among them, natural accuracy and PGD accuracy can intuitively reflect the network’s robustness disparity. And AA is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)). AA regards networks to be robust only if the model correctly classify adversarial data with all types of attacks, which is among the most reliable evaluation of adversarial robustness to date.
4.2 ANALYSIS OF THE PROPOSED ALGORITHM
We delve into SAAT to investigate its each component, including the role of minimum adversarial loss ρ and the impact of maximum perturbation budget ϵmax. All analysis experiments are conducted using PreAct ResNet-18 (He et al., 2016) on CIFAR10 dataset (Krizhevsky et al., 2009).
The role of minimum adversarial loss ρ. We empirically investigate the role of minimum adversarial loss by using different ρ to generate adversarial training data, where ϵmax is assigned a sufficiently large value, such as ϵmax = 128/255. The minimum adversarial loss ρ for SAAT varies from 0 to 2.2, and the evaluation results are summarized in Figure 3 (a). It is observed that the model’s degree of robustness disparity is well-correlated with the adversarial training loss ρ. When ρ = 0, there is a large performance gap between the natural accuracy and robust accuracy. This performance gap keeps decreasing as ρ increases. And when ρ = 2.2, the robust accuracy is almost the same as the natural accuracy. Note that SAAT fails to converge when ρ > 2.2, which might be explained by the fact that robust accuracy should be lower than natural accuracy for any adversarially-trained networks. The clear correlation between the minimum training loss and degree of robustness disparity enables our SAAT to flexibly control the performance gap in output networks. Moreover, we observed that ρ is also closely related to the robust generalization gap. The learning cruves of SAAT with different ρ is shown in Figure 3 (c), and their robust generalization gap is summarized in Figure 3 (d). It can be seen that the robust accuracy is always in sync with the natural accuracy during the training phase and there is no significant robustness degradation as in Figure 1(c). When ρ = 1.5, the robust generalization gap is already very small.
The impact of maximum perturbation budget ϵmax. We further investigate the impact of the introduced maximum perturbation budget by comparing the robustness of models trained with different ϵmax. Given a fixed ρ, such as ρ = 1.5, the value of ϵmax varies from 0 to 128/255, and the evaluation results are summarized in Figure 3 (b). When ϵmax is small, increasing ϵmax leads to a flatter degree of robustness disparity, which indicates that more model capacity is allocated to defending against adversarial attacks. As expected, once ϵmax exceeds a certain value, the degree of robustness disparity reaches a plateau, which suggests that most of the adversarial training data already meet the minimum adversarial loss constraint, so the network maintains a steady robustness disparity bounded by ρ. Within this range, ϵmax adjusts the robustness disparity under the constraint imposed by ρ, which reflects a tradeoff between natural accuracy and adversarial robustness and shows that ϵmax offers a handle for adjusting this tradeoff. Notably, the observed tradeoff is not inherent to adversarial training but a consequence of model capacity scheduling. Note that the appropriate size of ϵmax is also influenced by the minimum adversarial loss ρ. In the following section, we fine-tune both ϵmax and ρ for the robustness evaluation of SAAT.
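The degree of robustness disparity used throughout this analysis can be computed directly from an accuracy-versus-budget sweep. The sketch below is one possible implementation: it uses a plain 20-step PGD as the evaluation attack (an illustrative choice), follows the definition RD = (1/n) Σi (A0 − Ai)/ϵi from Section 3.1 with ϵi expressed in pixels (so RD is in %/pixel), and sweeps the budgets 2/255 to 20/255 shown on the x-axis of Figure 3.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha=2/255, steps=20):
    # plain PGD, used here only for evaluation
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def accuracy(model, loader, device, eps=0.0):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if eps > 0:
            x = pgd_attack(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total

def robustness_disparity(model, loader, device, budgets=range(2, 22, 2)):
    # RD = (1/n) * sum_i (A_0 - A_i) / eps_i, with eps_i in pixels (out of 255)
    a0 = accuracy(model, loader, device, eps=0.0)
    drops = [(a0 - accuracy(model, loader, device, eps=b / 255)) / b for b in budgets]
    return sum(drops) / len(drops)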
4.3 ROBUSTNESS EVALUATION
Compared with standard AT, SAAT differs mainly in weakening the easy-to-attack samples (those whose adversarial loss exceeds the minimum adversarial loss) and strengthening the hard-to-attack samples (those whose adversarial loss is below the minimum adversarial loss). In this part, we investigate their respective effects on adversarial robustness using two classic baselines, AT and AWP, where AWP suppresses robust overfitting and achieves state-of-the-art adversarial robustness.
[Figure 3 appears here with five panels, all plotting accuracy (%): (a) role of the minimum adversarial loss, accuracy versus perturbation budget (*/255) for ρ from 0.0 to 2.2; (b) impact of the maximum perturbation budget, accuracy versus perturbation budget for ϵmax from 0 to 128/255; (c) learning curves of SAAT with ρ of 0.5, 1.0, 1.5 and 2.0 (from left to right), showing natural/adversarial training and test accuracy over epochs; (d) robust generalization gap (%) over epochs for SAAT with ρ = 0.5, 1.0, 1.5, 2.0 and standard AT; (e) SAAT under different model capacities (WRN-34-1/3/5/7), showing accuracy versus perturbation budget and the robustness disparity (%/pixel) per architecture.]
Figure 3: Robustness evaluation of SAAT with (a) varied ρ and (b) varied ϵmax; The (c) learning curve and (d) robust generalization gap with different ρ; (e) robustness disparity on networks with different model capacities.
Specifically, we process the easy-to-attack samples and the hard-to-attack samples separately, denoted as SAATdown and SAATup, respectively. For SAATdown, the maximum adversarial budget remains the same as in standard AT, e.g., ϵmax = 8, and the minimum adversarial loss is set to 1.5. The evaluation results are shown in Table 1. As expected, weakening the easy-to-attack samples biases adversarial training toward defending against weak attacks: it significantly increases the natural accuracy and maintains the PGD accuracy, but it not only aggravates the robustness disparity, it also degrades the model’s adversarial robustness (in terms of AA). For SAATup, increasing the maximum adversarial budget and the minimum adversarial loss devotes more model capacity to defending against adversarial attacks, which significantly alleviates the robustness disparity of the output network and further enhances the model’s adversarial robustness. The performance of SAATup across different datasets, model structures and AT methods is provided in Appendix A, where SAATup boosts adversarial robustness under all settings, demonstrating the effectiveness of the proposed method. Note that the robust accuracy of SAATup at ϵ = 8 first increases and then decreases. This is expected: as ϵmax and ρ increase, the robustness disparity of the output model becomes flatter, and the adversarial robustness also migrates toward relatively larger perturbation budgets. In addition, the performance gap between the last checkpoint and the best checkpoint of SAATup shrinks, which shows that strengthening the hard-to-attack samples can also effectively alleviate robust overfitting.
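In implementation terms, the two variants differ only in the (ϵmax, ρ) pair handed to the SA-PGD adversary during training, as in the small sketch below; the SAATup values shown are placeholders, since the exact settings of Table 1 are not restated here.

# SAAT_down: keep the standard-AT budget (8/255) but enforce a loss floor of 1.5
saat_down = dict(eps_max=8 / 255, rho=1.5)
# SAAT_up: enlarge both the budget and the loss floor (placeholder values)
saat_up = dict(eps_max=16 / 255, rho=2.0)

x_adv = strength_adaptive_pgd(model, x, y, **saat_up)   # inside the training loop of Section 4.2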
4.4 PERFORMANCE UNDER DIFFERENT MODEL CAPACITIES
We extend SAAT to networks with different model capacities. Specifically, we perform SAAT on a series of Wide ResNet-34-x networks, where x is 1, 3, 5, and 7, respectively; the larger the x, the larger the model capacity. ρ is fixed at 1.5 and ϵmax is set sufficiently large, e.g., ϵmax = 128/255. The evaluation results are summarized in Figure 3 (e). The output networks maintain a steady degree of robustness disparity across different model capacities, which exactly reflects the nature of our approach: SAAT adapts to networks of various model capacities.
5 CONCLUSION
We present a strength-adaptive adversarial training (SAAT) method in this paper. The proposed approach distinguishes itself from others by using a minimum adversarial loss constraint for generating adversarial training data, which is adaptive to the dynamic training schedule and to networks of various model capacities. We show that the adversarial training loss is well correlated with the degree of robustness disparity and the robust generalization gap, and we empirically verify that SAAT effectively alleviates robust overfitting, mitigates the robustness disparity of output networks, and further enhances adversarial robustness by adjusting the tradeoff of adversarial training, demonstrating the effectiveness of the proposed approach. We hope that the robustness disparity measure we offer can further improve the completeness of adversarial robustness evaluations (Carlini et al., 2019), and we expect more new techniques to be proposed toward learning a fully robust network.
A APPENDIX
In this part, we conduct extended comparative experiments. Specifically, we conduct experiments under different datasets, different model structures and different AT methods. The results are summarized in Table 2 and Table 3. Experimental results show that the proposed method can achieve higher adversarial robustness under all settings, demonstrating the effectiveness of the proposed approach. | 1. What is the focus and contribution of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to adapt to various model capacities and handle robustness overfitting?
3. Do you have any concerns or suggestions regarding the methodology, such as applying the heuristic method to ε directly or conducting additional experiments to demonstrate the advantage of introducing ρ?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, including the need for more baselines and evaluations under the same budget?
5. Are there any other questions or suggestions the reviewer has, such as visualizing perturbations obtained or comparing with related works like Cai et al. and Kim et al.? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper works on adversarial training, a well-known defense for adversarial samples. It aims to address the problem of robustness over-fitting, which refers to the phenomenon that training with a fixed budget degenerates model performance. The authors propose a method called Strength-Adaptive Adversarial Training, which can update the strength of the attacks during the training process.
Strengths And Weaknesses
Strength
The algorithm is fit for networks of various model capacities.
Figure 2 (d) shows robustness differences between the "best" and "last" checkpoints, which is interesting.
Weakness
About motivation: I question the novelty of your motivation that using a dynamic perturbation budget can alleviate robustness over-fitting. In fact, Cai et al. [1] adjust the iteration numbers of PGD to control the adversarial strength. Similarly, Kim et al. [2] directly manipulate the step size of FGSM to overcome the question. At least, you should cite these works in the introduction part of your paper.
About methods: Your work introduces a new parameter ρ and uses a heuristic method to adjust it. However, why not apply the same heuristic method to ϵ directly? So you can adjust the adversarial budget during training. I hope you can do some additional experiments to demonstrate the advantage of introducing ρ.
About methods: You claim that your method can cope with different model capacities. However, I do not see such an experiment in your paper. Could you train some models of different architectures adversarially with the same ρ?
About Experiments: Your paper bears a lack of baselines. Please add some evaluation on related works (like [1,2]) to Table 1. Note that you should evaluate these methods under the same budget.
Clarity, Quality, Novelty And Reproducibility
Please increase the size of your images.
Some other questions: Researchers set the perturbation budget to avoid the adversarial samples being recognized by people. I find your method loosens the constraints of the perturbations in Table 1. Can you visualize some perturbations obtained?
[1] Cai, Qi-Zhi, et al. "Curriculum adversarial training." arXiv preprint arXiv:1805.04807 (2018). [2] Kim, Hoki, Woojin Lee, and Jaewook Lee. "Understanding catastrophic overfitting in single-step adversarial training." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021. |
1. What is the focus of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and effectiveness?
3. Do you have any concerns or questions about the pseudo-codes or the adaptive attack budget mechanism?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the proposed method or expanding the experimental scope? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes Strength-Adaptive Adversarial Training (SAAT) to improve adversarial training. Specifically, SAAT adopts a dynamic and adaptive strategy to control the adversarial attack budget. Instead of setting a fixed adversarial budget, this paper proposes to enlarge the budget while attacking. The proposed SAAT is evaluated on CIFAR-10 with Resnet-18. Some marginal improvements are observed.
Strengths And Weaknesses
Strengths:
The proposed idea is very straightforward and clean, although I have some questions regarding the proposed pseudo-codes.
Experimental results show that SAAT can achieve consistent yet marginal improvements over vanilla adversarial training and AWP.
Weaknesses:
Adopting an adaptive attacking budget during adversarial training has been highlighted by many previous works. The proposed SAAT is just a simple early stop (loss value) based on some empirical observations, which in my personal view is not novel.
In Alg.1, considering that step size α is the same as perturbation budget step size τ (2/255), what is the meaning of enlarging epsilon by τ every attacking step? The real adversarial perturbation will be bounded by the step size (if ignoring [min, max] clipping). So in my view, the adaptive attack budget is just the normal PGD using ϵ = ϵmax. Besides, the break in Line 12 only jumps out of the inner for-loop. Does this mean that attacking will continue after enlarging the epsilon for this adv. example whose loss has already been greater than ρ?
By setting SA-PGD step K = 3, the proposed SAAT might adopt many more attack steps to train their models. For example, with ϵmax = 8/255 and τ = 2/255, the actual attack steps might be at most ϵ/τ ∗ K = 12. When ϵmax = 14/255 (a common setting in this paper’s experiments), the attack steps might be at most 21. A lot of computational cost will be introduced in SAAT.
The proposed SAAT doesn’t show any improvements under common adversarial evaluation settings (eps=8/255) and only marginal improvements can be observed on vanilla adversarial training and AWP under very specific settings. Besides, SAAT hurts clean accuracy very much. Only experiments on CIFAR-10 and Resnet-18 are provided. The authors should conduct experiments on other architectures with larger capacities and benchmarks (CIFAR-100).
Clarity, Quality, Novelty And Reproducibility
This paper is well-written and clear. The proposed method is very naive. This paper sounds technical and can be reproduced. |
ICLR | Title
Strength-Adaptive Adversarial Training
Abstract
Adversarial training (AT) is proved to reliably improve network’s robustness against adversarial data. However, current AT with a pre-specified perturbation budget has limitations in learning a robust network. Firstly, applying a prespecified perturbation budget on networks of various model capacities will yield divergent degree of robustness disparity between natural and robust accuracies, which deviates from robust network’s desideratum. Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget fails to upgrade as the growth of network robustness, which leads to robust overfitting and further degrades the adversarial robustness. To overcome these limitations, we propose Strength-Adaptive Adversarial Training (SAAT). Specifically, the adversary employs an adversarial loss constraint to generate adversarial training data. Under this constraint, the perturbation budget will be adaptively adjusted according to the training state of adversarial data, which can effectively avoid robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data through the adversarial loss, which manipulates model capacity scheduling during training, and thereby can flexibly control the degree of robustness disparity and adjust the tradeoff between natural accuracy and robustness. Extensive experiments show that our proposal boosts the robustness of adversarial training.
1 INTRODUCTION
Current deep neural networks (DNNs) achieve impressive breakthroughs on a variety of fields such as computer vision (He et al., 2016), speech recognition (Wang et al., 2017), and NLP (Devlin et al., 2018), but it is well-known that DNNs are vulnerable to adversarial data: small perturbations of the input which are imperceptible to humans will cause wrong outputs (Szegedy et al., 2013; Goodfellow et al., 2014). As countermeasures against adversarial data, adversarial training (AT) is a method for hardening networks against adversarial attacks (Madry et al., 2017). AT trains the network using adversarial data that are constrained by a pre-specified perturbation budget, which aims to obtain the output network with the minimum adversarial risk of an sample to be wrongly classified under the same perturbation budget. Across existing defense techniques, AT has been proved to be one of the most effective and reliable methods against adversarial attacks (Athalye et al., 2018).
Although promising to improve the network’s robustness, AT with a pre-specified perturbation budget still has limitations in learning a robust network. Firstly, the pre-specified perturbation budget is inadaptable for networks of various model capacities, yielding divergent degree of robustness disparity between natural and robust accuracies, which deviates from robust network’s desideratum. Ideally, for a robust network, perturbing the attack budget within a small range should not cause signifcant accuracy degradation. Unfortunately, the degree of robustness disparity is intractable for AT with a pre-specified perturbation budget. In standard AT, there could be a prominent degree of robustness disparity in output networks. For instance, a standard PGD adversarially-trained PreAct ResNet18 network has 84% natural accuracy and only 46% robust accuracy on CIFAR10 under ℓ∞ threat model, as shown in Figure 1(a). Empirically, we have to increase the pre-specified perturbation budget to allocate more model capacity for defense against adversarial attacks to mitigate the degree of robustness disparity, as shown in Figure 1(b). However, the feasible range of perturbation budget is different for networks with different model capacities. For example, AT with perturbation budget ϵ = 40/255 will make PreAct ResNet-18 optimization collapse, while wide ResNet-34-10 can learn normally. In order to maintain a steady degree of robustness disparity, we have to find separate perturbation budgets for each network with different model capacities. Therefore, it may be pessimistic to use AT with a pre-specified perturbation budget to learn a robust network.
Secondly, the attack strength of adversarial training data constrained by the pre-specified perturbation budget is gradually weakened as the growth of network robustness. During the training process, adversarial training data are generated on the fly and are changed based on the updating of the network. As the the network’s adversarial robustness continues to increase, the attack strength of adversarial training data with the pre-specified perturbation budget is getting relatively weaker. Given the limited network capacity, a degenerate or stagnant adversary accompanied by an evolving network will easily cause training bias: adversarial training is more inclined to the defense against weak strength attacks, and thereby erodes defenses on strong strength attacks, leading to the undesirable robust overfitting, as shown in Figure 1(c). Moreover, compared with the “best” checkpoint in AT with robust overfitting, the “last” checkpoint’s defense advantage in weak strength attack is slight, while its defense disadvantage in strong strength attack is significant, which indicates that robust overfitting not only exacerbates the degree of robustness disparity, but also further degrades the adversarial robustness. Thus, it may be deficient to use adversarial data with a pre-specified perturbation budget to train a robust network.
To overcome these limitations, we propose strength-adaptive adversarial training (SAAT), which employs an adversarial loss constraint to generate adversarial training data. The adversarial perturbation generated under this constraint is adaptive to the dynamic training schedule and networks of various model capacities. Specifically, as adversarial training progresses, a larger perturbation budget is required to satisfy the adversarial loss constraint since the network becomes more robust. Thus, the perturbation budgets in our SAAT is adaptively adjusted according to the training state of adversarial data, which restrains the training bias and effectively avoids robust overfitting. Besides, SAAT explicitly constrains the attack strength of training data by the adversarial loss constraint, which guides model capacity scheduling in adversarial training, and thereby can flexibly adjust the tradeoff between natural accuracy and robustness, ensuring that the output network maintains a steady degree of robustness disparity even under networks with different model capacities.
Our contributions are as follows. (a) In standard AT, we characterize the pessimism of adversary with a pre-specified perturbation budget, which is due to the intractable robustness disparity and undesirable robust overfitting (in Section 3.1). (b) We propose a new adversarial training method, i.e., SAAT (its learning objective in Section 3.2 and its realization in Section 3.3). SAAT is a general adversarial training method that can be easily converted to natural training or standard AT. (c) Empirically, we find that adversarial training loss is well-correlated with the degree of robustness disparity and robust generalization gap (in Section 4.2), which enables our SAAT to overcome the issue of robust overfitting and flexibly adjust the tradeoff of adversarial training, leading to the improved natural accuracy and robustness (in Section 4.3).
2 PRELIMINARY AND RELATED WORK
In this section, we review the adversarial training method and related works.
2.1 ADVERSARIAL TRAINING
Learning objective. Let fθ, X and ℓ be the network f with trainable model parameters θ, the input feature space, and the loss function, respectively. Consider a C-class dataset S = {(x_i, y_i)}_{i=1}^n, where x_i ∈ X is an input and y_i ∈ Y = {0, 1, ..., C − 1} is its associated label. In natural training, most machine
learning tasks could be formulated as solving the following optimization problem:
min_θ (1/n) ∑_{i=1}^n ℓ(f_θ(x_i), y_i).  (1)
The learning objective of natural training is to obtain a network with the minimum empirical risk of a natural input being wrongly classified. In adversarial training, the adversary adds an adversarial perturbation to each sample, i.e., transforms S = {(x_i, y_i)}_{i=1}^n into S′ = {(x′_i = x_i + δ_i, y_i)}_{i=1}^n. The adversarial perturbations {δ_i}_{i=1}^n are constrained by a pre-specified budget, i.e., {δ ∈ ∆ : ||δ||_p ≤ ϵ}, where p can be 1, 2, ∞, etc. In order to defend against such attacks, standard adversarial training (AT) (Madry et al., 2017) resorts to solving the following objective function:
min_θ (1/n) ∑_{i=1}^n max_{δ_i∈∆} ℓ(f_θ(x_i + δ_i), y_i).  (2)
Note that the outer minimization remains the same as Eq.(1), and the inner maximization operator can also be re-written as
δ_i = argmax_{δ_i∈∆} ℓ(f_θ(x_i + δ_i), y_i),  (3)
where x′_i = x_i + δ_i is the most adversarial data within the perturbation budget ∆. Standard AT employs the most adversarial data generated according to Eq. (3) to update the current model. The learning objective of standard AT is to obtain a network with the minimum adversarial risk of an input being wrongly classified under the pre-specified perturbation budget.
Realizations. The objective function of standard AT (Eq. (2)) is a composition of an inner maximization problem and an outer minimization problem, with one step generating adversarial data and one step minimizing the loss on the generated adversarial data w.r.t. the model parameters θ. For the outer minimization problem, Stochastic Gradient Descent (SGD) (Bottou, 1999) and its variants are widely used to optimize the model parameters (Rice et al., 2020). For the inner maximization problem, Projected Gradient Descent (PGD) (Madry et al., 2017) is the most common approximation method for generating adversarial perturbations, and can be viewed as a multi-step variant of the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014). Given a natural example x ∈ X and a step size α > 0, PGD works as follows:
δ_{k+1} = Π_∆(α · sign(∇_x ℓ(f(x + δ_k), y)) + δ_k),  k ∈ N,  (4)
where δ_k is the adversarial perturbation at step k, and Π_∆ is the projection function that projects the adversarial perturbation back into the pre-specified budget ∆ if necessary.
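For concreteness, a minimal PyTorch-style sketch of this inner maximization is given below; it is only an illustrative sketch, the function and variable names are ours, and clamping the perturbed input to the valid pixel range [0, 1] is an implementation assumption not spelled out in Eq. (4).

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps, alpha, num_steps):
    """Generate an l_inf-bounded adversarial example by iterating Eq. (4)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(num_steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Ascent step on the perturbation, then projection back into the budget.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed input inside the valid pixel range [0, 1].
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()
```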
2.2 RELATED WORK
Stopping criteria. There are different stopping criteria for PGD-based adversarial training. For example, standard AT (Madry et al., 2017) employs a fixed number of iterations K, namely PGD-K, which is commonly used in many outstanding adversarial training variants, such as TRADES (Zhang et al., 2019), MART (Wang et al., 2019b), and RST (Carmon et al., 2019). Besides, some works further enhance the PGD-K method by incorporating additional optimization mechanisms, such as curriculum learning (Cai et al., 2018), FOSC (Wang et al., 2019a), and geometry reweighting (Zhang et al., 2020b). On the other hand, some works adopt a different, misclassification-aware stopping criterion, which stops the iterations once the network misclassifies the adversarial data. This criterion is widely used in emerging adversarial training variants, such as FAT (Zhang et al., 2020a), MMA (Ding et al., 2018), IAAT (Balaji et al., 2019), ATES (Sitawarin et al., 2020), and Customized AT (Cheng et al., 2020). Different from these works, we propose strength-adaptive PGD (SA-PGD), which uses a minimum adversarial loss as the stopping criterion to generate efficient adversarial data for adversarial training.
Relationship between accuracy and robustness. Also relevant to this work are studies of the relationship between natural accuracy and robustness. PGD-based AT can enhance robustness against adversarial data, but significantly degrades accuracy on natural data. One popular view is that there is an inevitable tradeoff between robustness and natural accuracy. For example, Tsipras et al. (2018) claimed robustness and natural accuracy might be at odds. Su et al. (2018) concluded that there is a linear negative correlation between the logarithm of natural accuracy and robustness. Zhang et al.
(2019) theoretically characterized the tradeoff. However, by the definition of adversarial perturbation, humans are both robust and accurate with no such tradeoff. Some other works also provide evidence that robustness and natural accuracy are not opposed. For example, Stutz et al. (2019) confirmed the existence of adversarial data on the manifold of natural data. Yang et al. (2020) showed that benchmark datasets with adversarial perturbations are distributionally separated. Raghunathan et al. (2020) stated that additional unlabeled data help mitigate the tradeoff. Nakkiran (2019) proved that the tradeoff is due to insufficient network expressive ability. A separate but related line of work has also challenged the tradeoff by improving natural accuracy while maintaining robustness (Zhang et al., 2020a) or retaining natural accuracy while improving robustness (Zhang et al., 2020b). However, these works use PGD as the robustness evaluation, which is not always adequate since these networks can be defeated by stronger attacks (Liu et al., 2021). In this work, we combine AutoAttack (Croce & Hein, 2020b), a stronger and more reliable robustness evaluation method, to conduct a more comprehensive evaluation of AT’s tradeoff.
3 STRENGTH-ADAPTIVE ADVERSARIAL TRAINING
In this section, we introduce the proposed strength-adaptive adversarial training (SAAT) and its learning objective as well as algorithmic realization.
3.1 MOTIVATIONS OF SAAT
Robustness disparity is intractable for AT with a pre-specified perturbation budget. For adversarially-trained networks, it is widely recognized that robust accuracy should be lower than natural accuracy. Nevertheless, the degree of robustness disparity between natural and robust accuracy is often overlooked. Ideally, for a fully robust network, the robust accuracy and natural accuracy should be very close. The ability to maintain a steady degree of robustness disparity is therefore critical for learning a robust network. However, current AT methods typically employ a pre-specified perturbation budget to generate adversarial data, whose attack strength is relative to the network’s model capacity, and this fails to yield a steady degree of robustness disparity across different networks, as shown in Figure 2(a). For each network, we further numerically compute its degree of robustness disparity (RD): RD = (1/n) ∑_{i=1}^n (A_0 − A_i)/ϵ_i, where A_i denotes the accuracy under perturbation budget ϵ_i. As shown by the statistical results in Figure 2(b), when the perturbation budget is fixed, the degree of robustness disparity becomes more prominent as the model capacity increases, which is not a robust network’s desideratum. Thus, to maintain a steady degree of robustness disparity in adversarial training, we should relax the fixed perturbation budget constraint and flexibly adjust the attack strength of training data according to the network’s model capacity.
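As a small illustration of this statistic, the snippet below computes RD from accuracies measured at a few budgets; the accuracy numbers are made up purely for illustration.

```python
def robustness_disparity(acc_by_budget):
    """RD = (1/n) * sum_i (A_0 - A_i) / eps_i over the non-zero budgets eps_i."""
    a0 = acc_by_budget[0.0]
    pairs = [(eps, acc) for eps, acc in acc_by_budget.items() if eps > 0]
    return sum((a0 - acc) / eps for eps, acc in pairs) / len(pairs)

# Hypothetical accuracies (%) measured at budgets given in units of 1/255.
example = {0.0: 84.0, 4.0: 68.0, 8.0: 46.0, 12.0: 30.0}
print(robustness_disparity(example))  # average accuracy drop per unit of budget
```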
Robust overfitting degrades the network’s adversarial robustness. AT employs the most adversarial data to reduce the sensitivity of the network’s output w.r.t. adversarial perturbations of the natural data. However, during the training process, adversarial training data are generated on the fly and become relatively weaker as the network’s adversarial robustness increases. Given a certain amount of allocatable model capacity, adversarial training data with a pre-specified perturbation budget will inevitably induce training bias, which eventually leads to robust overfitting. As shown in Figure 2(c), we observe that when the perturbation budget is fixed, robust overfitting occurs as the network’s model capacity increases. We further compare the adversarial robustness between the “best” and “last” checkpoints in AT without and with robust overfitting. As shown in Figure 2(d), robust overfitting significantly degrades the network’s adversarial robustness. Therefore, to avoid robust overfitting in adversarial training, we should flexibly adjust the attack strength of adversarial training data to adapt to the dynamic training schedule.
3.2 LEARNING OBJECTIVE OF SAAT
Let ρ be the specified minimum adversarial loss constraint on adversarial training data. In the learning objective of SAAT, the outer minimization for optimizing model parameters still follows Eq. (2) or Eq. (1). However, instead of generating the adversarial perturbation δ via inner maximization, we generate δ as follows:
δ_i = argmin_{δ_i} ℓ(f_θ(x_i + δ_i), y_i)  s.t.  ℓ(f_θ(x_i + δ_i), y_i) ≥ ρ.  (5)
Note that the operator argmax in Eq. (3) is replaced with argmin here, and there is no explicit perturbation budget constraint on δ. Instead, we adopt the magnitude of the adversarial loss to constrain the generation of adversarial perturbations. The constraint ensures that the loss of adversarial training data is greater than the specified minimum adversarial loss ρ. Among all δ satisfying the constraint, we select the one minimizing ℓ(f_θ(x_i + δ_i), y_i). In terms of the process of generating adversarial perturbations, Eq. (5) can be regarded as an adaptive adversary, since the adversarial loss is related to the training schedule and the network’s model capacity. For example, in the early stages of training, little adversarial attack may be needed to generate qualified adversarial training data, whereas in the later stages more effort is required because the network is more robust. On the other hand, in terms of the attack strength of training data, Eq. (5) is actually an attack strength-fixed adversary, since the adversarial loss of training data is constrained by a fixed ρ.
The learning objective of SAAT is to obtain networks with a steady degree of robustness disparity, which is achieved by optimizing the model using adversarial training data with a fixed attack strength (in terms of the adversarial loss ρ). ρ is used to guide model capacity scheduling during adversarial training, so as to ensure that the output network maintains a steady degree of robustness disparity.
Relation with natural training and standard AT. Notice that the learning objective of SAAT is extremely general. When ρ ≤ 0, SAAT is equivalent to natural training (refer to Eq. (1)): all training data need no adversarial perturbation, so most of the model capacity is used to learn natural data. When ρ → ∞, SAAT is equivalent to standard AT (refer to Eq. (2)): all training data are the most adversarial data, so a large amount of model capacity is used for enhancing adversarial robustness (depending on the maximal perturbation budget). When 0 < ρ < ∞, SAAT lies between natural training and standard AT and can manipulate the model capacity scheduling during the training phase, so as to obtain output networks with multiple alternative forms of adversarial robustness for various practical needs. Eq. (5) recovers both natural training and standard AT and is thus a more general learning objective of adversarial training.
3.3 REALIZATION OF SAAT
The learning objective of SAAT implies the optimization of an adversarially robust network with a steady degree of robustness disparity, with one step generating qualified adversarial training data and one step minimizing the adversarial loss w.r.t. the model parameters. Specifically, we search for qualified adversarial training data by adjusting the perturbation budget and attack steps. For instance, given a perturbation budget, we perform sufficient PGD attacks within this budget. If the most adversarial data still does not satisfy the constraint of Eq. (5), we increase the perturbation budget and conduct further PGD attacks until we find adversarial data that satisfies the minimum adversarial loss criterion.
How to estimate the optimal perturbation budget is an open question. Here we heuristically design a simple implementation: progressive search. Specifically, the perturbation budgets of training data are initially set to 0 and then increased stepwise (e.g., in increments of step size τ). Each update of the perturbation budget is followed by K iterations of PGD attack, until the generated adversarial data satisfy the minimum adversarial loss criterion. The limitation of this implementation is that setting the initial perturbation budget to 0 increases the
Algorithm 1 Strength-Adaptive PGD (SA-PGD)
1: Input: data x ∈ X, label y ∈ Y, model f, loss function ℓ, maximum perturbation budget ϵ_max, minimum adversarial loss ρ, perturbation budget ϵ, perturbation budget step size τ, SA-PGD step K, SA-PGD step size α
2: Output: x′
3: x′ ← x; ϵ ← 0
4: if ℓ(f(x′), y) ≥ ρ then
5:     break
6: else
7:     while ϵ < ϵ_max do
8:         ϵ ← ϵ + τ
9:         for k = 1, ..., K do
10:            x′ ← Π_ϵ(α · sign(∇_{x′} ℓ(f(x′), y)) + x′)
11:            if ℓ(f(x′), y) ≥ ρ then
12:                return x′
13:            end if
14:        end for
15:    end while
16: end if
Algorithm 2 Strength-Adaptive Adversarial Training (SAAT)
1: Input: network f_θ, training dataset S = {(x_i, y_i)}_{i=1}^n, learning rate η, number of epochs T, batch size m, number of batches M
2: Output: adversarially robust network f_θ with a target degree of robustness disparity
3: for epoch = 1, ..., T do
4:     for mini-batch = 1, ..., M do
5:         Sample a mini-batch {(x_i, y_i)}_{i=1}^m from S
6:         for i = 1, ..., m (in parallel) do
7:             Obtain adversarial data x′_i of x_i by Algorithm 1
8:         end for
9:         θ ← θ − η (1/m) ∑_{i=1}^m ∇_θ ℓ(f_θ(x′_i), y_i)
10:    end for
11: end for
computational cost and slows training, even though it ensures that the optimal perturbation budget will be estimated.
A maximally allowed perturbation budget (ϵmax) is introduced: we observe that even with the largest perturbation (e.g., ϵ = 255/255), there are still some examples (outliers) that fail to satisfy the minimum adversarial loss constraint. Considering that the pixel values are strictly sampled in [0, 255/255], it is necessary to introduce a maximum perturbation budget to avoid infinite loops.
Algorithm 1 is our strength-adaptive PGD method (SA-PGD), which returns the generated adversarial training data. Algorithm 2 is the proposed strength-adaptive adversarial training (SAAT). SAAT leverages Algorithm 1 for obtaining the qualified adversarial data to optimize the model parameters.
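As a rough guide to how Algorithm 1 might look in code, the following PyTorch-style sketch processes one example (Algorithm 2 would apply it to every sample of a mini-batch); it is a sketch only, the helper names are ours, and we assume inputs lie in [0, 1].

```python
import torch

@torch.no_grad()
def _adv_loss(model, x_adv, y, loss_fn):
    """Adversarial loss of the current candidate, evaluated without building a graph."""
    return loss_fn(model(x_adv), y).item()

def sa_pgd(model, x, y, loss_fn, rho, eps_max, tau, K, alpha):
    """Strength-adaptive PGD: enlarge the budget until the adversarial loss reaches rho."""
    x_adv, eps = x.detach().clone(), 0.0
    if _adv_loss(model, x_adv, y, loss_fn) >= rho:      # natural example is already hard enough
        return x_adv
    while eps < eps_max:
        eps += tau                                      # grow the perturbation budget stepwise
        for _ in range(K):                              # K PGD steps under the current budget
            x_adv.requires_grad_(True)
            grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project into budget
            if _adv_loss(model, x_adv, y, loss_fn) >= rho:  # minimum adversarial loss satisfied
                return x_adv
    return x_adv
```

The outer loop of Algorithm 2 then simply minimizes the loss on the returned adversarial data with SGD, exactly as in standard AT.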
4 EXPERIMENTS
In this section, we conduct comprehensive experiments to evaluate the effectiveness of SAAT including its experimental setup (in Section 4.1), algorithm analysis (in Section 4.2), robustness evaluation (in Section 4.3), and the performance under different model capacities (in Section 4.4).
4.1 EXPERIMENTAL SETUP
Our code is implemented on the open source PyTorch framework with a single NVIDIA A100SXM4-40GB GPU. The code as well as related models will be released for public use and verification. We conduct experiments on ℓ∞ threat model and follow the hyper-parameter setting of Rice et al. (2020) for a fair comparison with the state-of-the-art AT methods. For training, the network is trained for 200 epochs using SGD with momentum 0.9, weight decay 5 × 10−4, and an initial learning rate of 0.1. The learning rate is divided by 10 at the 100-th and 150-th epoch, re-
spectively. Conventional data augmentations, including random crops with 4 pixels of padding and random horizontal flips, are applied. For the adversary, we use SA-PGD (Algorithm 1) to generate adversarial training data. The step size α = 2/255 is used, following standard PGD (Madry et al., 2017). We set the perturbation budget step size τ equal to the SA-PGD step size α, i.e., τ = 2/255, and use K = 3 SA-PGD steps so that the adversarial data are sufficiently attacked under each updated perturbation budget. For robustness evaluation, the output model is tested under a series of adversaries, including natural data, PGD (Madry et al., 2017), and AutoAttack (AA) (Croce & Hein, 2020b). Natural accuracy and PGD accuracy intuitively reflect the network’s robustness disparity. AA is an ensemble of complementary attacks, consisting of three white-box attacks (APGD-CE (Croce & Hein, 2020b), APGD-DLR (Croce & Hein, 2020b), and FAB (Croce & Hein, 2020a)) and a black-box attack (Square Attack (Andriushchenko et al., 2020)). AA regards a network as robust only if it correctly classifies the adversarial data produced by all of these attacks, making it among the most reliable evaluations of adversarial robustness to date.
4.2 ANALYSIS OF THE PROPOSED ALGORITHM
We delve into SAAT to investigate each of its components, including the role of the minimum adversarial loss ρ and the impact of the maximum perturbation budget ϵ_max. All analysis experiments are conducted using PreAct ResNet-18 (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky et al., 2009).
The role of minimum adversarial loss ρ. We empirically investigate the role of the minimum adversarial loss by using different ρ to generate adversarial training data, where ϵ_max is assigned a sufficiently large value, such as ϵ_max = 128/255. The minimum adversarial loss ρ for SAAT varies from 0 to 2.2, and the evaluation results are summarized in Figure 3(a). It is observed that the model’s degree of robustness disparity is well correlated with the adversarial training loss ρ. When ρ = 0, there is a large performance gap between natural accuracy and robust accuracy. This performance gap keeps decreasing as ρ increases, and when ρ = 2.2, the robust accuracy is almost the same as the natural accuracy. Note that SAAT fails to converge when ρ > 2.2, which might be explained by the fact that robust accuracy should be lower than natural accuracy for any adversarially-trained network. The clear correlation between the minimum training loss and the degree of robustness disparity enables SAAT to flexibly control the performance gap in output networks. Moreover, we observe that ρ is also closely related to the robust generalization gap. The learning curves of SAAT with different ρ are shown in Figure 3(c), and the corresponding robust generalization gaps are summarized in Figure 3(d). The robust accuracy stays in sync with the natural accuracy during the training phase, and there is no significant robustness degradation as in Figure 1(c). When ρ = 1.5, the robust generalization gap is already very small.
The impact of maximum perturbation budget ϵ_max. We further investigate the impact of the introduced maximum perturbation budget by comparing the robustness performance of models trained using different ϵ_max. Given a fixed ρ, such as ρ = 1.5, the value of ϵ_max varies from 0 to 128/255, and the evaluation results are summarized in Figure 3(b). When ϵ_max is small, increasing ϵ_max leads to a flatter degree of robustness disparity, which indicates that more model capacity is allocated to defending against adversarial attacks. As expected, when ϵ_max is greater than a certain value, the degree of robustness disparity reaches a plateau, which indicates that most of the adversarial training data already meet the minimum adversarial loss constraint, so the network tends to maintain a steady robustness disparity bounded by ρ. ϵ_max adjusts the robustness disparity within the plateau constrained by ρ, which reflects a tradeoff between natural accuracy and adversarial robustness and suggests that ϵ_max helps adjust this tradeoff. Notably, the observed tradeoff is not inherent in adversarial training but a consequence of model capacity scheduling. Note that the appropriate size of ϵ_max is also influenced by the minimum adversarial loss ρ. In the following section, we fine-tune both ϵ_max and ρ for the robustness evaluation of SAAT.
4.3 ROBUSTNESS EVALUATION
Compared with standard AT, the difference of SAAT mainly lies in weakening the easy-to-attack samples (those whose adversarial loss is higher than the minimum adversarial loss) and enhancing the hard-to-attack samples (those whose adversarial loss is lower than the minimum adversarial loss). In this part, we investigate their respective effects on adversarial robustness with two classic baselines: AT and AWP, where AWP suppresses robust overfitting and achieves state-of-the-art adversarial robustness.
[Figure 3 plots omitted. Panels: (a) role of minimum adversarial loss; (b) impact of maximum perturbation budget; (c) learning curves of SAAT with ρ of 0.5, 1.0, 1.5 and 2.0 (from left to right); (d) robust generalization gap; (e) SAAT under different model capacities. Axes show accuracy (%) or robust generalization gap (%) against perturbation budget (*/255), epoch, or model architecture.]
Figure 3: Robustness evaluation of SAAT with (a) varied ρ and (b) varied ϵ_max; the (c) learning curves and (d) robust generalization gap with different ρ; (e) robustness disparity on networks with different model capacities.
Specifically, we process the easy-to-attack samples and the hard-to-attack samples separately, denoted as SAATdown and SAATup, respectively. For SAATdown, the maximum adversarial budget remains the same as in standard AT, e.g., ϵ_max = 8/255, and the minimum adversarial loss is set to 1.5. The evaluation results are shown in Table 1. As expected, weakening the easy-to-attack samples makes adversarial training inclined toward defending against weak attacks, which significantly increases natural accuracy and maintains PGD accuracy, but essentially not only aggravates the robustness disparity but also degrades the model’s adversarial robustness (in terms of AA). For SAATup, by increasing the maximum adversarial budget and the minimum adversarial loss, more model capacity is used to defend against adversarial attacks, which significantly alleviates the robustness disparity of the output network and further enhances the model’s adversarial robustness. The performance evaluation of SAATup across different datasets, model structures, and AT methods is provided in Appendix A, where SAATup boosts adversarial robustness under all settings, demonstrating the effectiveness of the proposed method. Note that the robust accuracy of SAATup at ϵ = 8/255 first increases and then decreases. This is reasonable because, as ϵ and ρ increase, the robustness disparity of the output model becomes flatter and the adversarial robustness migrates toward relatively larger perturbation budgets. In addition, the performance gap between the last checkpoint and the best checkpoint of SAATup shrinks, which illustrates that enhancing the hard-to-attack samples can also effectively alleviate robust overfitting.
4.4 PERFORMANCE UNDER DIFFERENT MODEL CAPACITIES
We extend SAAT to networks with different model capacities. Specifically, we perform SAAT on a series of Wide ResNet-34-x networks, where x is 1, 3, 5, and 7 respectively. Note that the larger the x, the larger the model capacity. ρ is fixed at 1.5 and ϵmax is sufficiently large, such
as ϵmax = 128/255. The evaluation results are summarized in Figure 3 (e). It can be observed that the output network can maintain a steady degree of robustness disparity across different model capacities. Such observations exactly reflect the nature of our approach that SAAT is able to adapt to networks of various model capacities.
5 CONCLUSION
We present strength-adaptive adversarial training (SAAT) in this paper. The proposed approach distinguishes itself from others by using a minimum adversarial loss constraint to generate adversarial training data, which is adaptive to the dynamic training schedule and to networks of various model capacities. We show that the adversarial training loss is well correlated with the degree of robustness disparity and the robust generalization gap, and empirically verify that SAAT can effectively alleviate robust overfitting, mitigate the robustness disparity of output networks, and further enhance adversarial robustness by adjusting the tradeoff of adversarial training, demonstrating the effectiveness of the proposed approach. We hope that the robustness disparity measure we offer can further improve the completeness of adversarial robustness evaluation methods (Carlini et al., 2019), and we expect more new techniques to be proposed toward learning a fully robust network.
A APPENDIX
In this part, we conduct extended comparative experiments. Specifically, we conduct experiments under different datasets, different model structures and different AT methods. The results are summarized in Table 2 and Table 3. Experimental results show that the proposed method can achieve higher adversarial robustness under all settings, demonstrating the effectiveness of the proposed approach. | 1. What is the focus of the paper regarding adversarial training?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the experimental setup and results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work proposes adversarial training with a threshold of minimum adversarial loss, i.e., while the training adversarial example cannot reach the minimum loss, expand its attack radius.
Strengths And Weaknesses
Strength:
Well written and easy to follow
The idea is straightforward
Weaknesses:
Limited novelty. For me, setting a minimum adversarial loss during adversarial training is not exciting.
Weak experiments.
(a) The experiments show only a trade-off between adversarial accuracy (under AutoAttack) and clean accuracy, i.e., increasing the (adaptive) adversarial attack strength during training improves test adversarial accuracy while damaging test clean accuracy.
(b) In the current development of adversarial training, Wide ResNet-34-10 on CIFAR-10/100 is necessary, where TRADES-AWP achieves better performance under AutoAttack. I wonder about the performance of SAAT under these settings (i.e., the results of Tab. 1 with WRN-34-10 and TRADES-AWP).
Experiment-based work without deep analysis.
Clarity, Quality, Novelty And Reproducibility
The manuscript is well written and clear. |
ICLR | Title
On Convergence and Stability of GANs
Abstract
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
1 INTRODUCTION
Generative modeling involves taking a set of samples drawn from an unknown data generating distribution Preal and finding an estimate Pmodel that closely resembles it. Generative adversarial networks (GAN) (Goodfellow et al., 2014) are a powerful framework for fitting implicit generative models. The basic setup consists of two networks, the generator and the discriminator, playing against each other in a repeated zero-sum game setting. The goal here is to reach an equilibrium where Preal and Pmodel are close, and the alternating gradient updates procedure (AGD) is used to achieve this. However, this process is highly unstable and often results in mode collapse (Goodfellow, 2017). This calls for a deeper investigation into the training dynamics of GANs.
In this paper, we propose studying GAN training dynamics as a repeated game in which both the players are using no-regret algorithms (Cesa-Bianchi & Lugosi, 2006) and discuss how AGD 1 falls under this paradigm. In contrast, much of the theory (Goodfellow et al., 2014; Arjovsky & Bottou, 2017) and recent developments (Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017) are based on the unrealistic assumption that the discriminator is playing optimally (in the function space) at each step and as a result, there is consistent minimization of a divergence between real and generated distributions. This corresponds to at least one player using the best-response algorithm (in the function space), and the resulting game dynamics can be completely different in both these cases (Nisan et al., 2007). Thus, there is a clear disconnect between theoretical arguments used as motivation in recent literature and what actually happens in practice.
We would like to point out that the latter view can still be useful for reasoning about the asymptotic equilibrium situation but we argue that regret minimization is the more appropriate way to think about GAN training dynamics. So, we analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We start with a short analysis of the artificial convex-concave case of the GAN game in section 2.2. This setting has a unique solution and guaranteed convergence (of averaged iterates) using no-regret algorithms can be shown with standard arguments from game theory literature. Here, we make explicit, the critical (previously not widely known) connection between AGD used in GAN training and regret minimization. This immediately yields a novel proof for the asymptotic convergence of GAN training, in the non-parametric limit. Prior to our work, such a result (Goodfellow et al., 2014) required a strong assumption that the discriminator is optimal at each step.
1Most of our analysis applies to the simultaneous gradient updates procedure as well
However, these convergence results do not hold when the game objective function is non-convex, which is the practical case when deep neural networks are used. In non-convex games, global regret minimization and equilibrium computation are computationally hard in general. Recent game-theoretic literature indicates that AGD can end up cycling (Mertikopoulos et al., 2017) or converging to a (potentially bad) local equilibrium, under some conditions (Hazan et al., 2017). We hypothesize these to be the reasons for cycling and mode collapse observed during GAN training, respectively (section 2.3). In this work, we do not explore the cycling issue but focus our attention on the mode collapse problem. In contrast to our hypothesis, the prevalent view of mode collapse and instability (Arjovsky & Bottou, 2017) is that it results from attempting to minimize a strong divergence during training. However, as we argued earlier, GAN training with AGD does not consistently minimize a divergence and therefore, such a theory is not suitable to discuss convergence or to address the stability issue.
Next, if mode collapse is indeed the result of an undesirable local equilibrium, a natural question then is how we can avoid it? We make a simple observation that, in the GAN game, mode collapse situations are often accompanied by sharp gradients of the discriminator function around some real data points (section 2.4). Therefore, a simple strategy to mitigate mode collapse is to regularize the discriminator so as to constrain its gradients in the ambient data space. We demonstrate that this improves the stability using a toy experiment with one hidden layer neural networks. This gives rise to a new explanation for why WGAN and gradient penalties might be improving the stability of GAN training – they are mitigating the mode collapse problem by keeping the gradients of the discriminator function small in data space. From this motivation, we propose a training algorithm involving a novel gradient penalty scheme called DRAGAN (Deep Regret Analytic Generative Adversarial Networks) which enables faster training, achieves improved stability and modeling performance (over WGAN-GP (Gulrajani et al., 2017) which is the state-of-the-art stable training procedure) across a variety of architectures and objective functions.
Below, we provide a short literature review. Several recent works focus on stabilizing the training of GANs. While some solutions (Radford et al., 2015; Salimans et al., 2016) require the usage of specific architectures or modeling objectives, some (Che et al., 2016; Zhao et al., 2016) significantly deviate from the original GAN framework. Other promising works in this direction (Metz et al., 2016; Arjovsky et al., 2017; Qi, 2017; Gulrajani et al., 2017) impose a significant computational overhead. Thus, a fast and versatile method for consistent stable training of GANs is still missing in the literature. Our work is aimed at addressing this.
To summarize, our contributions are as follows:
• We propose a new way of reasoning about the GAN training dynamics - by viewing AGD as regret minimization.
• We provide a novel proof for the asymptotic convergence of GAN training in the nonparametric limit and it does not require the discriminator to be optimal at each step.
• We discuss how AGD can converge to a potentially bad local equilibrium in non-convex games and hypothesize this to be responsible for mode collapse during GAN training.
• We characterize mode collapse situations with sharp gradients of the discriminator function around some real data points.
• A novel gradient penalty scheme called DRAGAN is introduced based on this observation and we demonstrate that it mitigates the mode collapse issue.
2 THEORETICAL ANALYSIS OF GAN TRAINING DYNAMICS
We start with a brief description of the GAN framework (section 2.1). We discuss guaranteed convergence in the artificial convex-concave case using no-regret algorithms, and make a critical connection between GAN training process (AGD) and regret minimization (section 2.2). This immediately yields a novel proof for the asymptotic convergence of GAN training in the nonparametric limit. Then, we consider the practical non-convex case and discuss how AGD can converge to a potentially bad local equilibrium here (section 2.3). We characterize mode collapse situations with sharp gradients of the discriminator function around real samples and this provides an effective strategy to avoid them. This naturally leads to the introduction of our gradient penalty
scheme DRAGAN (section 2.4). We end with a discussion and comparison with other gradient penalties in the literature (section 2.5).
2.1 BACKGROUND
The GAN framework can be viewed as a repeated zero-sum game, consisting of two players - the generator, which produces synthetic data given some noise source, and the discriminator, which is trained to distinguish the generator’s samples from the real data. The generator model G is parameterized by φ, takes a noise vector z as input, and produces a synthetic sample Gφ(z). The discriminator model D is parameterized by θ, takes a sample x as input and computes Dθ(x), which can be interpreted as the probability that x is real.
The models G, D can be selected from any arbitrary class of functions – in practice, GANs typically rely on deep networks for both. Their cost functions are defined as
J^(D)(φ, θ) := −E_{x∼p_real} log D_θ(x) − E_z log(1 − D_θ(G_φ(z))),  and  J^(G)(φ, θ) := −J^(D)(φ, θ).
And the complete game can be specified as -
min_φ max_θ { J(φ, θ) = E_{x∼p_real} log D_θ(x) + E_z log(1 − D_θ(G_φ(z))) }
The generator distribution P_model asymptotically converges to the real distribution P_real if updates are made in the function space and the discriminator is optimal at each step (Goodfellow et al., 2014).
2.2 CONVEX-CONCAVE CASE AND NO-REGRET ALGORITHMS
According to Sion’s theorem (Sion, 1958), if Φ ⊂ R^m and Θ ⊂ R^n are compact and convex sets, and the function J : Φ × Θ → R is convex in its first argument and concave in its second, then we have -
min_{φ∈Φ} max_{θ∈Θ} J(φ, θ) = max_{θ∈Θ} min_{φ∈Φ} J(φ, θ)
That is, an equilibrium is guaranteed to exist in this setting where players’ payoffs correspond to the unique value of the game (Neumann, 1928).
A natural question then is how we can find such an equilibrium. A simple procedure that players can use is best-response algorithms (BRD). In each round, best-responding players play their optimal strategy given their opponent’s current strategy. Despite its simplicity, BRD are often computationally intractable and they don’t lead to convergence even in simple games. In contrast, a technique that is both efficient and provably works is regret minimization. If both players update their parameters using no-regret algorithms, then it is easy to show that their averaged iterates will converge to an equilibrium pair (Nisan et al., 2007). Let us first define no-regret algorithms.
Definition 2.1 (No-regret algorithm). Given a sequence of convex loss functions L_1, L_2, . . . : K → R, an algorithm that selects a sequence of k_t’s, each of which may only depend on previously observed L_1, . . . , L_{t−1}, is said to have no regret if R(T)/T = o(1), where we define
R(T) := ∑_{t=1}^T L_t(k_t) − min_{k∈K} ∑_{t=1}^T L_t(k)
We can apply no-regret learning to our problem of equilibrium finding in the GAN game J(·, ·) as follows. The generator imagines the function J(·, θ_t) as its loss function on round t, and similarly the discriminator imagines −J(φ_t, ·) as its loss function at t. After T rounds of play, each player computes the average iterates φ̄_T := (1/T) ∑_{t=1}^T φ_t and θ̄_T := (1/T) ∑_{t=1}^T θ_t. If V* is the equilibrium value of the game, and the players suffer regret R_1(T) and R_2(T) respectively, then one can show using standard arguments (Freund & Schapire, 1999) that -
V* − R_2(T)/T ≤ max_{θ∈Θ} J(φ̄_T, θ) − R_2(T)/T ≤ min_{φ∈Φ} J(φ, θ̄_T) + R_1(T)/T ≤ V* + R_1(T)/T.
In other words, θ̄_T and φ̄_T are "almost optimal" solutions to the game, where the "almost" approximation factor is given by the average regret terms (R_1(T) + R_2(T))/T. Under the no-regret
condition, the former will vanish, and hence we can guarantee convergence in the limit. Next, we define a popular family of no-regret algorithms.
Definition 2.2 (Follow The Regularized Leader). FTRL (Hazan et al., 2016) selects k_t on round t by solving for argmin_{k∈K} { ∑_{s=1}^{t−1} L_s(k) + (1/η) Ω(k) }, where Ω(·) is some convex regularization function and η is a learning rate.
Remark: Roughly speaking, if you select the regularization as Ω(·) = (1/2)‖·‖², then FTRL becomes the well-known online gradient descent or OGD (Zinkevich, 2003). Ignoring the case of constraint violations, OGD can be written in a simple iterative form: k_t = k_{t−1} − η∇L_{t−1}(k_{t−1}). The typical GAN training procedure using alternating gradient updates (or simultaneous gradient updates) is almost this - both players applying online gradient descent. Notice that the min/max objective function in GANs involves a stochastic component, with two randomized inputs given on each round, x and z, which are sampled from the data distribution and a standard multivariate normal, respectively. Let us write J_{x,z}(φ, θ) := log D_θ(x) + log(1 − D_θ(G_φ(z))). Taking expectations with respect to x and z, we define the full (non-stochastic) game as J(φ, θ) = E_{x,z}[J_{x,z}(φ, θ)]. But the above online training procedure is still valid with stochastic inputs. That is, the equilibrium computation would proceed similarly, where on each round we sample x_t and z_t, and follow the updates
φ_{t+1} ← φ_t − η ∇_φ J_{x_t,z_t}(φ_t, θ_t)  and  θ_{t+1} ← θ_t + η′ ∇_θ J_{x_t,z_t}(φ_t, θ_t)
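A minimal PyTorch-style sketch of one round of these stochastic alternating updates is given below; it is a sketch only, the names are ours, and we assume the discriminator outputs a logit so that sigmoid(D(·)) plays the role of D_θ(·).

```python
import torch
import torch.nn.functional as F

def alternating_gan_step(G, D, opt_G, opt_D, x_real, z):
    """One round of alternating gradient updates on J_{x,z}(phi, theta)."""
    # theta <- theta + eta' * grad_theta J : ascent step for the discriminator.
    j_d = F.logsigmoid(D(x_real)).mean() + F.logsigmoid(-D(G(z).detach())).mean()
    opt_D.zero_grad()
    (-j_d).backward()     # optimizers minimize, so negate J to ascend on it
    opt_D.step()

    # phi <- phi - eta * grad_phi J : descent step for the generator
    # (only the second term of J depends on phi).
    j_g = F.logsigmoid(-D(G(z))).mean()
    opt_G.zero_grad()
    j_g.backward()
    opt_G.step()
```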
On a side note, a benefit of this stochastic perspective is that we can get a generalization bound on the mean parameters φ̄_T after T rounds of optimization. The celebrated "online-to-batch conversion" (Cesa-Bianchi et al., 2004) implies that E_{x,z}[J_{x,z}(φ̄_T, θ)], for any θ, is no more than the optimal value E_{x,z}[J_{x,z}(φ*, θ)] plus an "estimation error" bounded by E[(R_1(T) + R_2(T))/T], where the expectation is taken with respect to the sequence of samples observed along the way, and any randomness in the algorithm. Analogously, this applies to θ̄_T as well. A limitation of this result, however, is that it requires a fresh sample x_t to be used on every round.
To summarize, we discussed in this subsection how the artificial convex-concave case is easy to solve through regret minimization. While this is a standard result in the game theory and online learning literature, it is not widely known in the GAN literature. For instance, Salimans et al. (2016) and Goodfellow (2017) discuss a toy game which is convex-concave and show cycling behavior. But the simple solution in that case is to just average the iterates, as the sketch below illustrates. Further, we made explicit the critical connection between regret minimization and the alternating gradient updates procedure used for GAN training. Now, Goodfellow et al. (2014) argue that, if G and D have enough capacity (in the non-parametric limit) and updates are made in the function space, then the GAN game can be considered convex-concave. Thus, our analysis based on regret minimization immediately yields a novel proof for the asymptotic convergence of GANs, without requiring that the discriminator be optimal at each step.
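To make the averaging remark concrete, here is a tiny demonstration on the bilinear game min_x max_y xy (our own toy instance, not the exact game from those papers): under alternating gradient updates the last iterate keeps orbiting the equilibrium at (0, 0), while the averaged iterates approach it.

```python
import numpy as np

x, y, eta = 1.0, 1.0, 0.1
xs, ys = [], []
for _ in range(5000):
    x = x - eta * y          # descent step on x*y for the min player
    y = y + eta * x          # ascent step on x*y for the max player
    xs.append(x)
    ys.append(y)

print("last iterate:     ", round(x, 3), round(y, 3))                      # still far from (0, 0)
print("averaged iterates:", round(np.mean(xs), 3), round(np.mean(ys), 3))  # close to (0, 0)
```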
Moreover, the connection between regret minimization and GAN training process gives a novel way to reason about its dynamics. In contrast, the popular view of GAN training as consistently minimizing a divergence arises if the discriminator uses BRD (in the function space) and thus, it has little to do with the actual training process of GANs. As a result, this calls into question the motivation behind many recent developments like WGAN and gradient penalties among others, which improve the training stability of GANs. In the next subsection, we discuss the practical non-convex case and why training instability arises. This provides the necessary ideas to investigate mode collapse from our new perspective.
2.3 NON-CONVEX CASE AND LOCAL EQUILIBRIA
In practice, we choose G, D to be deep neural networks and the function J(φ, θ) need not be convex-concave anymore. The nice properties we had in the convex-concave case, like the existence of a unique solution and guaranteed convergence through regret minimization, no longer hold. In fact, regret minimization and equilibrium computation are computationally hard in general non-convex settings. However, analogous to the case of non-convex optimization (also intractable) where we focus on finding local minima, we can look for tractable solution concepts in non-convex games.
Recent work by Hazan et al. (2017) introduces the notion of local regret and shows that if both the players use a smoothed variant of OGD to minimize this quantity, then the non-convex game converges to some form of local equilibrium, under mild assumptions. The usual training procedure of GANs (AGD) corresponds to using a window size of 1 in their formulation. Thus, GAN training will eventually converge (approximately) to a local equilibrium which is described below or the updates will cycle. We leave it to future works to explore the equally important cycling issue and focus here on the former case.
Definition 2.3 (Local Equilibrium). A pair (φ*, θ*) is called an ϵ-approximate local equilibrium if it holds that
∀φ′, ‖φ′ − φ*‖ ≤ η : J(φ*, θ*) ≤ J(φ′, θ*) + ϵ
∀θ′, ‖θ′ − θ*‖ ≤ η : J(φ*, θ*) ≥ J(φ*, θ′) − ϵ
That is, in a local equilibrium, both the players do not have much of an incentive to switch to any other strategy within a small neighborhood of their current strategies. Now, we turn our attention to the mode collapse issue which poses a significant challenge to the GAN training process. The training is said to have resulted in mode collapse if the generator ends up mapping multiple z vectors to the same output x, which is assigned a high probability of being real by the discriminator (Goodfellow, 2017). We hypothesize this to be the result of the game converging to bad local equilibria.
The prevalent view of mode collapse and instability in GAN training (Arjovsky & Bottou, 2017) is that it is caused due to the supports of real and model distributions being disjoint or lying on low-dimensional manifolds. The argument is that this would result in strong distance measures like KL-divergence or JS-divergence getting maxed out, and the generator cannot get useful gradients to learn. In fact, this is the motivation for the introduction of WGAN (Arjovsky et al., 2017). But, as we argued earlier, GAN training does not consistently minimize a divergence as that would require using intractable best-response algorithms. Hence, such a theory is not suitable to discuss convergence or to address the instability of GAN training. Our new view of GAN training process as regret minimization is closer to what is used in practice and provides an alternate explanation for mode collapse - the existence of undesirable local equilibria. The natural question now is how we can avoid them?
2.4 MODE COLLAPSE AND GRADIENT PENALTIES
The problem of dealing with multiple equilibria in games and how to avoid undesirable ones is an important question in algorithmic game theory (Nisan et al., 2007). In this work, we constrain ourselves to the GAN game and aim to characterize the undesirable local equilibria (mode collapse) in an effort to avoid them. In this direction, after empirically studying multiple mode collapse cases, we found that it is often accompanied by the discriminator function having sharp gradients around some real data points (See Figure 1 2). This intuitively makes sense from the definition of mode collapse discussed earlier. Such sharp gradients encourage the generator to map multiple z vectors to a single output x and lead the game towards a degenerate equilibrium. Now, a simple strategy to mitigate this failure case would be to regularize the discriminator using the following penalty -
λ · E_{x∼P_real, δ∼N_d(0,cI)} [ ‖∇_x D_θ(x + δ)‖² ]
This strategy indeed improves the stability of GAN training. We show the results of a toy experiment with one hidden layer neural networks in Figure 2 and Figure 3 to demonstrate this. This partly explains the success of WGAN and gradient penalties in the recent literature (Gulrajani et al., 2017; Qi, 2017), and why they improve the training stability of GANs, despite being motivated by reasoning based on unrealistic assumptions. However, we noticed that this scheme in its current form can be brittle and, if over-penalized, the discriminator can end up assigning both a real point x and noise x + δ the same probability of being real. Thus, a better choice of penalty is -
λ · E_{x∼P_real, δ∼N_d(0,cI)} [ max(0, ‖∇_x D_θ(x + δ)‖² − k) ]
Finally, due to practical optimization considerations (this has also been observed in Gulrajani et al. (2017)), we instead use the penalty shown below in all our experiments.
λ · E_{x∼P_real, δ∼N_d(0,cI)} [ ‖∇_x D_θ(x + δ)‖ − k ]²   (1)
2At times, stochasticity seems to help in getting out of the basin of attraction of a bad equilibrium
This still works as long as small perturbations of real data, x + δ, are likely to lie off the data manifold, which is true in the image domain and some other settings. This is because, in these cases, we do want our discriminator to assign different probabilities of being real to training data and noisy samples. We caution practitioners to keep this important point in mind while making their choice of penalty. All of the above schemes have the same effect of constraining the norm of the discriminator’s gradients around real points to be small and can therefore mitigate the mode collapse situation. We refer to GAN training using these penalty schemes or heuristics as the DRAGAN algorithm.
Additional details:
• We use the vanilla GAN objective in our experiments, but our penalty improves stability using other objective functions as well. This is demonstrated in section 3.3.
• The penalty scheme used in our experiments is the one shown in equation 1.
• We use small pixel-level noise but it is possible to find better ways of imposing this penalty. However, this exploration is beyond the scope of our paper.
• The optimal configuration of the hyperparameters for DRAGAN depends on the architecture, dataset and data domain. We set them to be λ ∼ 10, k = 1 and c ∼ 10 in most of our experiments.
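As a rough sketch of how the penalty in equation 1 can be computed (PyTorch-style; the function name is ours, and reading N_d(0, cI) as Gaussian noise with per-pixel standard deviation √c is our interpretation, which may differ from the exact noise scheme used in a given implementation):

```python
import torch

def dragan_penalty(D, x_real, lambda_=10.0, k=1.0, c=10.0):
    """Local gradient penalty of equation (1):
    lambda * E_{x ~ P_real, delta ~ N_d(0, c*I)} ( ||grad_x D(x + delta)|| - k )^2."""
    delta = torch.randn_like(x_real) * (c ** 0.5)            # delta ~ N_d(0, c*I) (assumed scale)
    x_hat = (x_real + delta).detach().requires_grad_(True)   # perturbed real points only
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_ * ((grad_norm - k) ** 2).mean()
```

The returned term is simply added to the discriminator's loss before its gradient step; the generator update is untouched.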
2.5 COUPLED VS LOCAL PENALTIES
Several recent works have also proposed regularization schemes which constrain the discriminator’s gradients in the ambient data space, so as to improve the stability of GAN training. Despite being from different motivations, WGAN-GP and LS-GAN are closely related approaches to ours. First, we show that these two approaches are very similar, which is not widely known in the literature. Qi (2017) introduced LS-GAN with the idea of maintaining a margin between losses assigned to real and
fake samples. Further, they also impose a Lipschitz constraint on D, and the two conditions together result in a situation where the following holds for any real and fake sample pair (roughly) -
D_θ(x) − D_θ(G_φ(z)) ≈ ‖x − G_φ(z)‖   (2)
The authors argue that the resulting discriminator function would have non-vanishing gradients almost everywhere between real and fake samples (section 6 of Qi (2017)). Next, Gulrajani et al. (2017) proposed an extension to address various shortcomings of the original WGAN, and they impose the following condition on D:
‖∇_x D_θ(x̂)‖ ≈ 1   (3)
where x̂ = ϵ x + (1 − ϵ) G_φ(z) is some point on the line between a real and a fake sample, both chosen independently at random. This leads to D having norm-1 gradients almost everywhere between real and fake samples. Notice that this behavior is very similar to that of LS-GAN’s discriminator function. Thus, WGAN-GP is a slight variation of the original LS-GAN algorithm and we refer to these methods as “coupled penalties”.
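For contrast with the local penalty sketched earlier, a coupled penalty in the spirit of equation 3 can be written roughly as follows (PyTorch-style; names and the uniform choice of the interpolation weight are our assumptions):

```python
import torch

def coupled_gradient_penalty(D, x_real, x_fake, lambda_=10.0):
    """Coupled penalty of equation (3): push ||grad_x D(x_hat)|| toward 1 at points
    x_hat on lines between independently paired real and fake samples."""
    w = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (w * x_real + (1 - w) * x_fake).detach().requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_ * ((grad_norm - 1.0) ** 2).mean()
```

Unlike the local penalty, this term depends on the generator's current samples x_fake, which is precisely the coupling discussed below.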
On a side note, we also want to point out that WGAN-GP’s penalty doesn’t actually follow from KR-duality as claimed in their paper. By Lemma 1 of Gulrajani et al. (2017), the optimal discriminator D∗ will have norm-1 gradients (almost everywhere) only between those x and Gφ(z) pairs which are sampled from the optimal coupling or joint distribution π∗. Therefore, there is no basis for WGAN-GP’s penalty (equation 3) where arbitrary pairs of real and fake samples are used. This fact adds more credence to our theory regarding why gradient penalties might be mitigating mode collapse.
The most important distinction between coupled penalties and our methods is that we only impose gradient constraints in local regions around real samples. We refer to these penalty schemes as “local penalties”. Coupled penalties impose gradient constraints between real and generated samples and we point out some potential issues that arise from this:
• With adversarial training finding applications beyond fitting implicit generative models, penalties which depend on generated samples can be prohibitive.
• The resulting class of functions when coupled penalties are used will be highly restricted compared to our method and this affects modeling performance. We refer the reader to Figure 4 and appendix section 5.2.2 to see this effect.
• Our algorithm works with AGD, while WGAN-GP needs multiple inner iterations to optimize D. This is because the generated samples can be anywhere in the data space and they change from one iteration to the next. In contrast, we consistently regularize Dθ(x) only along the real data manifold.
To conclude, appropriate constraining of the discriminator’s gradients can mitigate mode collapse but we should be careful so that it doesn’t have any negative effects. We pointed out some issues with coupled penalties and how local penalties can help. We refer the reader to section 3 for further experimental results.
3 EXPERIMENTAL RESULTS
In section 3.1, we compare the modeling performance of our algorithm against vanilla GAN and WGAN variants in the standard DCGAN/CIFAR-10 setup. Section 3.2 demonstrates DRAGAN’s improved stability across a variety of architectures. In section 3.3, we show that our method also works with other objective functions. The appendix contains samples for inspection, some of the missing plots, and additional results. Throughout, we use the inception score (Salimans et al., 2016), a well-studied and reliable metric in the literature, together with sample quality to measure performance.
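For reference, a minimal sketch of the inception score itself, exp(E_x KL(p(y|x) ‖ p(y))), computed from classifier probabilities and without the split-averaging typically used in practice (the function name is ours):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) array of class probabilities p(y|x) for N generated samples."""
    p_y = probs.mean(axis=0, keepdims=True)                                  # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)     # KL(p(y|x) || p(y))
    return float(np.exp(kl.mean()))
```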
3.1 INCEPTION SCORES FOR CIFAR-10 USING DCGAN ARCHITECTURE
DCGAN is a family of architectures designed to perform well with the vanilla training procedure. They are ubiquitous in the GAN literature owing to the instability of vanilla GAN in general settings. We use this architecture to model CIFAR-10 and compare against vanilla GAN, WGAN and WGAN-GP. As WGANs need 5 discriminator iterations for every generator iteration, comparing the modeling performance can be tricky. To address this, we report two scores for vanilla GAN and DRAGAN - one using the same number of generator iterations as WGANs and one using the same number of discriminator iterations. The results are shown in Figure 5 and samples are included in the appendix (Figure 8). Notice that DRAGAN beats the WGAN variants in both configurations, while vanilla GAN is only slightly better. A key point to note here is that our algorithm is fast compared to WGANs, so in practice, the performance will be closer to the DRAGAN_d case. In the next section, we will show that if we move away from this specific architecture family, vanilla GAN training can become highly unstable and that the DRAGAN penalty mitigates this issue.
3.2 MEASURING STABILITY AND PERFORMANCE ACROSS ARCHITECTURES
Ideally, we would want our training procedure to perform well in a stable fashion across a variety of architectures (other than DCGANs). Similar to Arjovsky et al. (2017) and Gulrajani et al. (2017), we remove the stabilizing components of DCGAN architecture and demonstrate improved stability & modeling performance compared to vanilla GAN training (see appendix section 5.2.3). However, this is a small set of architectures and it is not clear if there is an improvement in general.
To address this, we introduce a metric termed the BogoNet score to compare the stability & performance of different GAN training procedures. The basic idea is to choose random architectures for players G and D independently, and evaluate the performance of different algorithms in the resulting games. A good algorithm should achieve stable performance without failing to learn or resulting in mode collapse, despite the potentially imbalanced architectures. In our experiment, each player is assigned a network from a diverse pool of architectures belonging to three different families (MLP, ResNet, DCGAN).
To demonstrate that our algorithm performs better compared to vanilla GAN training and WGAN-GP, we created 100 such instances of hard games. Each instance is trained using these algorithms on CIFAR-10 (under similar conditions for a fixed number of generator iterations, which gives a slight advantage to WGAN-GP) and we plot how inception score changes over time. For each algorithm, we calculated the average of final inception scores and area under the curve (AUC) over all 100 instances. The results are shown in Table 1. Notice that we beat the other algorithms in both metrics, which indicates some improvement in stability and modeling performance.
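The two summary statistics used here are straightforward to compute from the recorded inception-score curves; a hypothetical sketch (the function and variable names are ours, for illustration only) is:

```python
import numpy as np

def bogonet_summary(score_curves):
    """score_curves: list of 1-D arrays, one inception-score-vs-iteration
    curve per random (G, D) architecture pair trained with a given algorithm.
    Returns the mean final inception score and the mean area under the
    curve (AUC), averaged over all instances."""
    finals = [curve[-1] for curve in score_curves]
    aucs = [np.trapz(curve) for curve in score_curves]   # trapezoidal-rule AUC
    return float(np.mean(finals)), float(np.mean(aucs))

# Example with two made-up curves (a stable run and a collapsing run).
curves = [np.array([1.0, 2.5, 4.0, 5.5]), np.array([1.0, 2.0, 1.2, 1.1])]
print(bogonet_summary(curves))
```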
Further, we perform some qualitative analysis to verify that BogoNet score indeed captures the improvements in stability. We create another set of 50 hard architectures and compare DRAGAN against vanilla GAN training. Each instance is allotted 5 points and we split this bounty between the two algorithms depending on their performance. If both perform well or perform poorly, they get 2.5 points each, so that we nullify the effect of such non-differentiating architectures. However, if one algorithm achieves stable performance compared to the other (in terms of failure to learn or mode collapses), we assign it higher portions of the bounty. Results were judged by two of the authors in a blind manner: The curves were shown side-by-side with the choice of algorithm for each side being randomized and unlabeled. The vanilla GAN received an average score of 92.5 while our algorithm achieved an average score of 157.5 and this correlates with BogoNet score from earlier. See appendix section 5.3 for some additional details regarding this experiment.
3.3 STABILITY USING DIFFERENT OBJECTIVE FUNCTIONS
Our algorithm improves stability across a variety of objective functions and we demonstrate this using the following experiment. Nowozin et al. (2016) show that we can interpret GAN training as minimizing various f -divergences when an appropriate game objective function is used. We show experiments using the objective functions developed for Forward KL, Reverse KL, Pearson χ2, Squared Hellinger, and Total Variation divergence minimization. We use a hard architecture from the previous subsection to demonstrate the improvements in stability. Our algorithm is stable in all cases except for the total variation case, while the vanilla algorithm failed in all the cases (see Figure 6 for two examples and Figure 15 in appendix for all five). Thus, practitioners can now choose their game objective from a larger set of functions and use DRAGAN (unlike WGANs which requires a specific objective function).
4 CONCLUSIONS
In this paper, we propose to study GAN training process as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view and hypothesize that mode collapse occurs due to the existence of undesirable local equilibria. A simple observation is made about how the mode collapse situation often exhibits sharp gradients of the discriminator function around some real data points. This characterization partly explains the workings of previously proposed WGAN and gradient penalties, and motivates our novel penalty scheme. We show evidence of improved stability using DRAGAN and the resulting improvements in modeling performance across a variety of settings. We leave it to future works to explore our ideas in more depth and come up with improved training algorithms.
5 APPENDIX
5.1 SAMPLES AND LATENT SPACE WALKS
In this section, we provide samples from an additional experiment run on CelebA dataset (Figure 7). The samples from the experiment in section 3.1 are shown in Figure 8. Further, Radford et al. (2015) suggest that walking on the manifold learned by the generator can expose signs of memorization. We use DCGAN architecture to model MNIST and CelebA datasets using DRAGAN penalty, and the latent space walks of the learned models are shown in Figure 9 and Figure 10. The results demonstrate that the generator is indeed learning smooth transitions between different images, when our algorithm is used.
5.2 ADDITIONAL EXPERIMENTS
5.2.1 ONE HIDDEN LAYER NETWORK TO MODEL MNIST
We design a simple experiment where G and D are both fully connected networks with just one hidden layer. Vanilla GAN performs poorly even in this simple case and we observe severe mode collapses. In contrast, our algorithm is stable throughout and obtains decent quality samples despite the constrained setup.
5.2.2 8-GAUSSIANS EXPERIMENT
We analyze the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. As it can be seen in Figure 13, both of them approximately converge to the real distribution but notice that in the case of WGAN-GP, Dθ(x) seems overly constrained in the data space. In contrast, DRAGAN’s discriminator is more flexible.
Figure 13 (panels: Improved WGAN, DRAGAN): Comparing the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. Orange is real samples, green is generated samples. The level sets of Dθ(x) are shown in the background, with yellow as high and purple as low.
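For readers who wish to reproduce this toy setting, the 8-Gaussians dataset is commonly constructed as eight low-variance Gaussians placed uniformly on a circle; one such construction (the radius and standard deviation below are typical choices, not values taken from this paper) is:

```python
import numpy as np

def sample_8gaussians(n, radius=2.0, std=0.02, seed=0):
    """Sample n points from a mixture of 8 Gaussians placed on a circle."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(8) / 8
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    idx = rng.integers(0, 8, size=n)                  # pick a mode per sample
    return centers[idx] + std * rng.normal(size=(n, 2))

data = sample_8gaussians(512)
print(data.shape)  # (512, 2)
```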
5.2.3 STABILITY ACROSS DCGAN ARCHITECTURE VARIATIONS
DCGAN architecture has been designed following specific guidelines to make it stable (Radford et al., 2015). We restate the suggested rules here.
1. Use all-convolutional networks which learn their own spatial downsampling (discriminator) or upsampling (generator)
2. Remove fully connected hidden layers for deeper architectures
3. Use batch normalization in both the generator and the discriminator
4. Use ReLU activation in the generator for all layers except the output layer, which uses tanh
5. Use LeakyReLU activation in the discriminator for all layers
We show below that such constraints can be relaxed when using our algorithm and still maintain training stability. Below, we present a series of experiments in which we remove different stabilizing components from the DCGAN architecture and analyze the performance of our algorithm. Specifically, we choose the following four architectures which are difficult to train (in each case, we start with base DCGAN architecture and apply the changes) -
• No BN and a constant number of filters in the generator
• 4-layer 512-dim ReLU MLP generator
• tanh nonlinearities everywhere
• tanh nonlinearity in the generator and 4-layer 512-dim LeakyReLU MLP discriminator
Notice that, in each case, our algorithm is stable while the vanilla GAN training fails. A similar approach is used to demonstrate the stability of training procedures in Arjovsky et al. (2017) and Gulrajani et al. (2017).
[Figure panels: (a) tanh activation, (b) FC generator]
5.2.4 STABILITY ACROSS OBJECTIVE FUNCTIONS
Due to space limitations, we only showed plots for two cases in section 3.3. Below we show the results for all five cases.
[Figure panels: (a) Reverse KL, (b) Pearson χ², (c) Forward KL, (d) Total Variation]
5.3 BOGONET DETAILS
We used three families of architectures with probabilities - DCGAN (0.6), ResNet (0.2), MLP (0.2). Next, we further parameterized each family to create additional variation. For instance, the DCGAN family can result in networks with or without batch normalization, have LeakyReLU or Tanh nonlinearities. The number and width of filters, latent space dimensionality are some other possible variations in our experiment. Similarly, the number of layers and hidden units in each layer for MLP are chosen randomly. For ResNets, we chose their depth randomly. This creates a set of hard games which test the stability of a given training algorithm.
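A hypothetical sketch of this sampling procedure (the specific ranges below are illustrative, not the exact ones used in our experiments) is:

```python
import random

FAMILIES = ["DCGAN", "ResNet", "MLP"]
FAMILY_PROBS = [0.6, 0.2, 0.2]

def sample_architecture(rng):
    """Draw one random architecture specification for a single player."""
    family = rng.choices(FAMILIES, weights=FAMILY_PROBS, k=1)[0]
    if family == "DCGAN":
        return {"family": family,
                "batch_norm": rng.choice([True, False]),
                "nonlinearity": rng.choice(["LeakyReLU", "Tanh"]),
                "num_filters": rng.choice([32, 64, 128]),
                "latent_dim": rng.choice([64, 100, 128])}
    if family == "MLP":
        return {"family": family,
                "num_layers": rng.randint(2, 5),
                "hidden_units": rng.choice([256, 512, 1024])}
    return {"family": family, "depth": rng.randint(2, 6)}   # ResNet

rng = random.Random(0)
# One "hard game": generator and discriminator drawn independently.
game = {"G": sample_architecture(rng), "D": sample_architecture(rng)}
print(game)
```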
We showed qualitative analysis of the inception score plots in section 3.2 to verify that BogoNet score indeed captures the improvements in stability. Below, we show some examples of how the bounty splits were done. The plots in Figure 14 were scored as (averages are shown in DRAGAN, Vanilla GAN order):
A - (5, 0), B - (3.5, 1.5), C – (2.25, 2.75), D – (2, 3) | 1. What is the focus of the paper regarding regularization terms for discriminators in GANs?
2. What are the strengths of the proposed approach, particularly its simplicity and ease of implementation?
3. What are the weaknesses of the paper, specifically regarding the motivation and analysis of the regularization term?
4. Do you have any concerns about the choice of the regularization parameter's value and its impact on performance?
5. How does the reviewer assess the clarity, originality, and overall quality of the paper's content? | Review | Review
Summary
========
The authors present a new regularization term, inspired by game theory, which encourages the discriminator's gradient to have a norm equal to one. This reduces the number of local minima, so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum game with convex-concave functions.
Clarity
======
Overall, the paper is clear and well-written. However, the authors should better motivate the regularization introduced in section 2.3.
Originality
=========
The idea is novel and interesting. In addition, it is easy to implement for any GAN since it requires only an additional regularization term. Moreover, the numerical experiments are in favor of the proposed method.
Comments
=========
- Why should the norm of the gradient be equal to 1 and not another value? Would it be possible to improve the performance by introducing an additional hyper-parameter instead?
- Is the performance greatly impacted by other values of lambda and c (the suggested parameter values are lambda = c = 10)?
- As mentioned in the paper, the regularization affects the modeling performance. Maybe the authors should add a comparison between different regularization parameters to illustrate the real impact of lambda and c on the performance.
- GAN performance is usually worse on very large datasets such as ImageNet. Does this regularization trick make their performance better?
Post-rebuttal comments
---------------------------------
I modified my review score according to the problems raised by Reviewers 1 and 3. Although the idea is fairly simple and presents some advantages, the authors should go deeper in the analysis, especially because the idea is not so novel. |
ICLR | Title
On Convergence and Stability of GANs
Abstract
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
1 INTRODUCTION
Generative modeling involves taking a set of samples drawn from an unknown data generating distribution Preal and finding an estimate Pmodel that closely resembles it. Generative adversarial networks (GAN) (Goodfellow et al., 2014) is a powerful framework used for fitting implicit generative models. The basic setup consists of two networks, the generator and the discriminator, playing against each other in a repeated zero-sum game setting. The goal here is to reach an equilibrium where Preal, Pmodel are close, and the alternating gradient updates procedure (AGD) is used to achieve this. However, this process is highly unstable and often results in mode collapse (Goodfellow, 2017). This calls for a deeper investigation into the training dynamics of GANs.
In this paper, we propose studying GAN training dynamics as a repeated game in which both the players are using no-regret algorithms (Cesa-Bianchi & Lugosi, 2006) and discuss how AGD¹ falls under this paradigm. In contrast, much of the theory (Goodfellow et al., 2014; Arjovsky & Bottou, 2017) and recent developments (Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017) are based on the unrealistic assumption that the discriminator is playing optimally (in the function space) at each step and as a result, there is consistent minimization of a divergence between real and generated distributions. This corresponds to at least one player using the best-response algorithm (in the function space), and the resulting game dynamics can be completely different in both these cases (Nisan et al., 2007). Thus, there is a clear disconnect between theoretical arguments used as motivation in recent literature and what actually happens in practice.
We would like to point out that the latter view can still be useful for reasoning about the asymptotic equilibrium situation but we argue that regret minimization is the more appropriate way to think about GAN training dynamics. So, we analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We start with a short analysis of the artificial convex-concave case of the GAN game in section 2.2. This setting has a unique solution and guaranteed convergence (of averaged iterates) using no-regret algorithms can be shown with standard arguments from game theory literature. Here, we make explicit, the critical (previously not widely known) connection between AGD used in GAN training and regret minimization. This immediately yields a novel proof for the asymptotic convergence of GAN training, in the non-parametric limit. Prior to our work, such a result (Goodfellow et al., 2014) required a strong assumption that the discriminator is optimal at each step.
¹ Most of our analysis applies to the simultaneous gradient updates procedure as well.
However, these convergence results do not hold when the game objective function is non-convex, which is the practical case when deep neural networks are used. In non-convex games, global regret minimization and equilibrium computation are computationally hard in general. Recent gametheoretic literature indicates that AGD can end up cycling (Mertikopoulos et al., 2017) or converging to a (potentially bad) local equilibrium, under some conditions (Hazan et al., 2017). We hypothesize these to be the reasons for cycling and mode collapse observed during GAN training, respectively (section 2.3). In this work, we do not explore the cycling issue but focus our attention on the mode collapse problem. In contrast to our hypothesis, the prevalent view of mode collapse and instability (Arjovsky & Bottou, 2017) is that it results from attempting to minimize a strong divergence during training. However, as we argued earlier, GAN training with AGD does not consistently minimize a divergence and therefore, such a theory is not suitable to discuss convergence or to address the stability issue.
Next, if mode collapse is indeed the result of an undesirable local equilibrium, a natural question then is how we can avoid it? We make a simple observation that, in the GAN game, mode collapse situations are often accompanied by sharp gradients of the discriminator function around some real data points (section 2.4). Therefore, a simple strategy to mitigate mode collapse is to regularize the discriminator so as to constrain its gradients in the ambient data space. We demonstrate that this improves the stability using a toy experiment with one hidden layer neural networks. This gives rise to a new explanation for why WGAN and gradient penalties might be improving the stability of GAN training – they are mitigating the mode collapse problem by keeping the gradients of the discriminator function small in data space. From this motivation, we propose a training algorithm involving a novel gradient penalty scheme called DRAGAN (Deep Regret Analytic Generative Adversarial Networks) which enables faster training, achieves improved stability and modeling performance (over WGAN-GP (Gulrajani et al., 2017) which is the state-of-the-art stable training procedure) across a variety of architectures and objective functions.
Below, we provide a short literature review. Several recent works focus on stabilizing the training of GANs. While some solutions (Radford et al., 2015; Salimans et al., 2016) require the usage of specific architectures (or) modeling objectives, some (Che et al., 2016; Zhao et al., 2016) significantly deviate from the original GAN framework. Other promising works in this direction (Metz et al., 2016; Arjovsky et al., 2017; Qi, 2017; Gulrajani et al., 2017) impose a significant computational overhead. Thus, a fast and versatile method for consistent stable training of GANs is still missing in the literature. Our work is aimed at addressing this.
To summarize, our contributions are as follows:
• We propose a new way of reasoning about the GAN training dynamics - by viewing AGD as regret minimization.
• We provide a novel proof for the asymptotic convergence of GAN training in the nonparametric limit and it does not require the discriminator to be optimal at each step.
• We discuss how AGD can converge to a potentially bad local equilibrium in non-convex games and hypothesize this to be responsible for mode collapse during GAN training.
• We characterize mode collapse situations with sharp gradients of the discriminator function around some real data points.
• A novel gradient penalty scheme called DRAGAN is introduced based on this observation and we demonstrate that it mitigates the mode collapse issue.
2 THEORETICAL ANALYSIS OF GAN TRAINING DYNAMICS
We start with a brief description of the GAN framework (section 2.1). We discuss guaranteed convergence in the artificial convex-concave case using no-regret algorithms, and make a critical connection between GAN training process (AGD) and regret minimization (section 2.2). This immediately yields a novel proof for the asymptotic convergence of GAN training in the nonparametric limit. Then, we consider the practical non-convex case and discuss how AGD can converge to a potentially bad local equilibrium here (section 2.3). We characterize mode collapse situations with sharp gradients of the discriminator function around real samples and this provides an effective strategy to avoid them. This naturally leads to the introduction of our gradient penalty
scheme DRAGAN (section 2.4). We end with a discussion and comparison with other gradient penalties in the literature (section 2.5).
2.1 BACKGROUND
The GAN framework can be viewed as a repeated zero-sum game, consisting of two players - the generator, which produces synthetic data given some noise source and the discriminator, which is trained to distinguish generator's samples from the real data. The generator model G is parameterized by φ, takes a noise vector z as input, and produces a synthetic sample Gφ(z). The discriminator model D is parameterized by θ, takes a sample x as input and computes Dθ(x), which can be interpreted as the probability that x is real.
The models G, D can be selected from any arbitrary class of functions – in practice, GANs typically rely on deep networks for both. Their cost functions are defined as
$J^{(D)}(\phi, \theta) := -\mathbb{E}_{x \sim P_{\text{real}}} \log D_\theta(x) - \mathbb{E}_z \log(1 - D_\theta(G_\phi(z)))$, and $J^{(G)}(\phi, \theta) := -J^{(D)}(\phi, \theta)$
And the complete game can be specified as
$$\min_\phi \max_\theta \Big\{ J(\phi, \theta) = \mathbb{E}_{x \sim P_{\text{real}}} \log D_\theta(x) + \mathbb{E}_z \log(1 - D_\theta(G_\phi(z))) \Big\}$$
The generator distribution Pmodel asymptotically converges to the real distribution Preal if updates are made in the function space and the discriminator is optimal at each step (Goodfellow et al., 2014).
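For concreteness, a minimal PyTorch-style sketch of these two cost functions might look as follows (D and G are assumed to be arbitrary torch modules, with D outputting probabilities; this is an illustration, not a reference implementation):

```python
import torch

def gan_costs(D, G, x_real, z, eps=1e-8):
    """Vanilla GAN costs: J_D = -E[log D(x)] - E[log(1 - D(G(z)))], J_G = -J_D."""
    d_real = D(x_real)                       # probabilities in (0, 1)
    d_fake = D(G(z))
    j_d = -(torch.log(d_real + eps).mean()
            + torch.log(1.0 - d_fake + eps).mean())
    j_g = -j_d                               # zero-sum game
    return j_d, j_g
```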
2.2 CONVEX-CONCAVE CASE AND NO-REGRET ALGORITHMS
According to Sion's theorem (Sion, 1958), if Φ ⊂ R^m and Θ ⊂ R^n are compact and convex sets, and the function J : Φ×Θ → R is convex in its first argument and concave in its second, then we have
$$\min_{\phi \in \Phi} \max_{\theta \in \Theta} J(\phi, \theta) = \max_{\theta \in \Theta} \min_{\phi \in \Phi} J(\phi, \theta)$$
That is, an equilibrium is guaranteed to exist in this setting where players’ payoffs correspond to the unique value of the game (Neumann, 1928).
A natural question then is how we can find such an equilibrium. A simple procedure that players can use is best-response algorithms (BRD). In each round, best-responding players play their optimal strategy given their opponent's current strategy. Despite their simplicity, best-response algorithms are often computationally intractable and they do not lead to convergence even in simple games. In contrast, a technique that is both efficient and provably works is regret minimization. If both players update their parameters using no-regret algorithms, then it is easy to show that their averaged iterates will converge to an equilibrium pair (Nisan et al., 2007). Let us first define no-regret algorithms.
Definition 2.1 (No-regret algorithm). Given a sequence of convex loss functions $L_1, L_2, \ldots : K \to \mathbb{R}$, an algorithm that selects a sequence of $k_t$'s, each of which may only depend on previously observed $L_1, \ldots, L_{t-1}$, is said to have no regret if $R(T)/T = o(1)$, where we define
$$R(T) := \sum_{t=1}^{T} L_t(k_t) - \min_{k \in K} \sum_{t=1}^{T} L_t(k)$$
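To make the definition concrete, the sketch below measures the average regret of online gradient descent (introduced in the remark below) on a toy sequence of one-dimensional quadratic losses; the loss sequence and step-size schedule are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
targets = rng.normal(size=T)              # L_t(k) = (k - c_t)^2, convex in k

k, plays = 0.0, []
for t in range(1, T + 1):
    plays.append(k)                        # play k_t, then observe L_t
    grad = 2 * (k - targets[t - 1])        # gradient of the revealed loss
    k -= grad / np.sqrt(t)                 # OGD with eta_t = 1 / sqrt(t)

plays = np.array(plays)
best_fixed = targets.mean()                # minimizer of sum_t (k - c_t)^2
regret = ((plays - targets) ** 2).sum() - ((best_fixed - targets) ** 2).sum()
print(regret / T)                          # average regret shrinks as T grows
```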
We can apply no-regret learning to our problem of equilibrium finding in the GAN game J(·, ·) as follows. The generator imagines the function J(·, θt) as its loss function on round t, and similarly the discriminator imagines −J(φt, ·) as its loss function at t. After T rounds of play, each player computes the average iterates $\bar{\phi}_T := \frac{1}{T}\sum_{t=1}^{T}\phi_t$ and $\bar{\theta}_T := \frac{1}{T}\sum_{t=1}^{T}\theta_t$. If $V^*$ is the equilibrium value of the game, and the players suffer regret $R_1(T)$ and $R_2(T)$ respectively, then one can show using standard arguments (Freund & Schapire, 1999) that
$$V^* - \frac{R_2(T)}{T} \le \max_{\theta \in \Theta} J(\bar{\phi}_T, \theta) - \frac{R_2(T)}{T} \le \min_{\phi \in \Phi} J(\phi, \bar{\theta}_T) + \frac{R_1(T)}{T} \le V^* + \frac{R_1(T)}{T}.$$
In other words, $\bar{\theta}_T$ and $\bar{\phi}_T$ are "almost optimal" solutions to the game, where the "almost" approximation factor is given by the average regret terms $\frac{R_1(T)+R_2(T)}{T}$. Under the no-regret
condition, the former will vanish, and hence we can guarantee convergence in the limit. Next, we define a popular family of no-regret algorithms.
Definition 2.2 (Follow The Regularized Leader). FTRL (Hazan et al., 2016) selects $k_t$ on round t by solving for $\arg\min_{k \in K} \big\{ \sum_{s=1}^{t-1} L_s(k) + \frac{1}{\eta}\Omega(k) \big\}$, where Ω(·) is some convex regularization function and η is a learning rate.
Remark: Roughly speaking, if you select the regularization as $\Omega(\cdot) = \frac{1}{2}\|\cdot\|^2$, then FTRL becomes the well-known online gradient descent or OGD (Zinkevich, 2003). Ignoring the case of constraint violations, OGD can be written in a simple iterative form: $k_t = k_{t-1} - \eta \nabla L_{t-1}(k_{t-1})$. The typical GAN training procedure using alternating gradient updates (or simultaneous gradient updates) is almost this - both the players applying online gradient descent. Notice that the min/max objective function in GANs involves a stochastic component, with two randomized inputs given on each round, x and z, which are sampled from the data distribution and a standard multivariate normal, respectively. Let us write $J_{x,z}(\phi, \theta) := \log D_\theta(x) + \log(1 - D_\theta(G_\phi(z)))$. Taking expectations with respect to x and z, we define the full (non-stochastic) game as $J(\phi, \theta) = \mathbb{E}_{x,z}[J_{x,z}(\phi, \theta)]$. But the above online training procedure is still valid with stochastic inputs. That is, the equilibrium computation would proceed similarly, where on each round we sample $x_t$ and $z_t$, and follow the updates
$$\phi_{t+1} \leftarrow \phi_t - \eta \nabla_\phi J_{x_t, z_t}(\phi_t, \theta_t) \quad \text{and} \quad \theta_{t+1} \leftarrow \theta_t + \eta' \nabla_\theta J_{x_t, z_t}(\phi_t, \theta_t)$$
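A hypothetical PyTorch-style rendering of one round of these alternating updates (one discriminator step per generator step; optimizers, data loading and output ranges are placeholders):

```python
import torch

def agd_round(D, G, opt_d, opt_g, x_real, latent_dim, eps=1e-8):
    """One round of alternating stochastic gradient updates on the GAN game."""
    z = torch.randn(x_real.size(0), latent_dim)

    # Ascent step on J(phi, theta) for the discriminator (theta).
    opt_d.zero_grad()
    j = (torch.log(D(x_real) + eps).mean()
         + torch.log(1.0 - D(G(z).detach()) + eps).mean())
    (-j).backward()                        # maximize J  <=>  minimize -J
    opt_d.step()

    # Descent step on J(phi, theta) for the generator (phi).
    opt_g.zero_grad()
    torch.log(1.0 - D(G(z)) + eps).mean().backward()   # only phi-dependent term
    opt_g.step()
```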
On a side note, a benefit of this stochastic perspective is that we can get a generalization bound on the mean parameters $\bar{\phi}_T$ after T rounds of optimization. The celebrated "online-to-batch conversion" (Cesa-Bianchi et al., 2004) implies that $\mathbb{E}_{x,z}[J_{x,z}(\bar{\phi}_T, \theta)]$, for any θ, is no more than the optimal value $\mathbb{E}_{x,z}[J_{x,z}(\phi^*, \theta)]$ plus an "estimation error" bounded by $\mathbb{E}\left[\frac{R_1(T)+R_2(T)}{T}\right]$, where the expectation is taken with respect to the sequence of samples observed along the way, and any randomness in the algorithm. Analogously, this applies to $\bar{\theta}_T$ as well. A limitation of this result, however, is that it requires a fresh sample $x_t$ to be used on every round.
To summarize, we discussed in this subsection about how the artificial convex-concave case is easy to solve through regret minimization. While this is a standard result in game theory and online learning literature, it is not widely known in the GAN literature. For instance, Salimans et al. (2016) and Goodfellow (2017) discuss a toy game which is convex-concave and show cycling behavior. But, the simple solution in that case is to just average the iterates. Further, we made explicit, the critical connection between regret minimization and alternating gradient updates procedure used for GAN training. Now, Goodfellow et al. (2014) argue that, if G and D have enough capacity (in the non-parametric limit) and updates are made in the function space, then the GAN game can be considered convex-concave. Thus, our analysis based on regret minimization immediately yields a novel proof for the asymptotic convergence of GANs, without requiring that the discriminator be optimal at each step.
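The effect of averaging can be seen on the simplest convex-concave example, J(φ, θ) = φ·θ, whose unique equilibrium is (0, 0); a small numerical sketch (the step-size schedule and iteration count are arbitrary choices) is:

```python
import numpy as np

phi, theta = 1.0, 1.0
phis, thetas = [], []
for t in range(1, 20001):
    eta = 0.1 / np.sqrt(t)                     # diminishing step size
    g_phi, g_theta = theta, phi                # gradients of J(phi, theta) = phi * theta
    phi, theta = phi - eta * g_phi, theta + eta * g_theta
    phis.append(phi)
    thetas.append(theta)

print(phis[-1], thetas[-1])                    # last iterate keeps circling the equilibrium
print(np.mean(phis), np.mean(thetas))          # averaged iterates are much closer to (0, 0)
```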
Moreover, the connection between regret minimization and GAN training process gives a novel way to reason about its dynamics. In contrast, the popular view of GAN training as consistently minimizing a divergence arises if the discriminator uses BRD (in the function space) and thus, it has little to do with the actual training process of GANs. As a result, this calls into question the motivation behind many recent developments like WGAN and gradient penalties among others, which improve the training stability of GANs. In the next subsection, we discuss the practical non-convex case and why training instability arises. This provides the necessary ideas to investigate mode collapse from our new perspective.
2.3 NON-CONVEX CASE AND LOCAL EQUILIBRIA
In practice, we choose G, D to be deep neural networks and the function J(φ, θ) need not be convex-concave anymore. The nice properties we had in the convex-concave case like the existence of a unique solution and guaranteed convergence through regret minimization no longer hold. In fact, regret minimization and equilibrium computation are computationally hard in general non-convex settings. However, analogous to the case of non-convex optimization (also intractable) where we focus on finding local minima, we can look for tractable solution concepts in non-convex games.
Recent work by Hazan et al. (2017) introduces the notion of local regret and shows that if both the players use a smoothed variant of OGD to minimize this quantity, then the non-convex game converges to some form of local equilibrium, under mild assumptions. The usual training procedure of GANs (AGD) corresponds to using a window size of 1 in their formulation. Thus, GAN training will eventually converge (approximately) to a local equilibrium which is described below or the updates will cycle. We leave it to future works to explore the equally important cycling issue and focus here on the former case.
Definition 2.3 (Local Equilibrium). A pair $(\phi^*, \theta^*)$ is called an $\epsilon$-approximate local equilibrium if it holds that
$$\forall \phi',\ \|\phi' - \phi^*\| \le \eta :\quad J(\phi^*, \theta^*) \le J(\phi', \theta^*) + \epsilon$$
$$\forall \theta',\ \|\theta' - \theta^*\| \le \eta :\quad J(\phi^*, \theta^*) \ge J(\phi^*, \theta') - \epsilon$$
That is, in a local equilibrium, both the players do not have much of an incentive to switch to any other strategy within a small neighborhood of their current strategies. Now, we turn our attention to the mode collapse issue which poses a significant challenge to the GAN training process. The training is said to have resulted in mode collapse if the generator ends up mapping multiple z vectors to the same output x, which is assigned a high probability of being real by the discriminator (Goodfellow, 2017). We hypothesize this to be the result of the game converging to bad local equilibria.
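As a concrete reading of Definition 2.3, a brute-force check of the two conditions by random perturbation might look as follows; since it only probes finitely many directions, it can at best refute, never certify, the property:

```python
import numpy as np

def looks_like_local_equilibrium(J, phi, theta, eta=1e-2, eps=1e-4,
                                 trials=200, seed=0):
    """Probe Definition 2.3 with random perturbations of norm at most eta."""
    rng = np.random.default_rng(seed)
    base = J(phi, theta)
    for _ in range(trials):
        d_phi = rng.normal(size=np.shape(phi))
        d_phi *= eta / (np.linalg.norm(d_phi) + 1e-12)
        if J(phi + d_phi, theta) < base - eps:       # phi player could improve
            return False
        d_theta = rng.normal(size=np.shape(theta))
        d_theta *= eta / (np.linalg.norm(d_theta) + 1e-12)
        if J(phi, theta + d_theta) > base + eps:     # theta player could improve
            return False
    return True
```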
The prevalent view of mode collapse and instability in GAN training (Arjovsky & Bottou, 2017) is that it is caused due to the supports of real and model distributions being disjoint or lying on low-dimensional manifolds. The argument is that this would result in strong distance measures like KL-divergence or JS-divergence getting maxed out, and the generator cannot get useful gradients to learn. In fact, this is the motivation for the introduction of WGAN (Arjovsky et al., 2017). But, as we argued earlier, GAN training does not consistently minimize a divergence as that would require using intractable best-response algorithms. Hence, such a theory is not suitable to discuss convergence or to address the instability of GAN training. Our new view of GAN training process as regret minimization is closer to what is used in practice and provides an alternate explanation for mode collapse - the existence of undesirable local equilibria. The natural question now is how we can avoid them?
2.4 MODE COLLAPSE AND GRADIENT PENALTIES
The problem of dealing with multiple equilibria in games and how to avoid undesirable ones is an important question in algorithmic game theory (Nisan et al., 2007). In this work, we constrain ourselves to the GAN game and aim to characterize the undesirable local equilibria (mode collapse) in an effort to avoid them. In this direction, after empirically studying multiple mode collapse cases, we found that it is often accompanied by the discriminator function having sharp gradients around some real data points (see Figure 1).² This intuitively makes sense from the definition of mode collapse discussed earlier. Such sharp gradients encourage the generator to map multiple z vectors to a single output x and lead the game towards a degenerate equilibrium. Now, a simple strategy to mitigate this failure case would be to regularize the discriminator using the following penalty:
$$\lambda \cdot \mathbb{E}_{x \sim P_{\text{real}},\, \delta \sim N_d(0, cI)} \left[ \|\nabla_x D_\theta(x + \delta)\|^2 \right]$$
This strategy indeed improves the stability of GAN training. We show the results of a toy experiment with one hidden layer neural networks in Figure 2 and Figure 3 to demonstrate this. This partly explains the success of WGAN and gradient penalties in the recent literature (Gulrajani et al., 2017; Qi, 2017), and why they improve the training stability of GANs, despite being motivated by reasoning based on unrealistic assumptions. However, we noticed that this scheme in its current form can be brittle and if over-penalized, the discriminator can end up assigning both a real point x and noise x + δ the same probability of being real. Thus, a better choice of penalty is
$$\lambda \cdot \mathbb{E}_{x \sim P_{\text{real}},\, \delta \sim N_d(0, cI)} \left[ \max\left(0, \|\nabla_x D_\theta(x + \delta)\|^2 - k\right) \right]$$
Finally, due to practical optimization considerations (this has also been observed in Gulrajani et al. (2017)), we instead use the penalty shown below in all our experiments.
$$\lambda \cdot \mathbb{E}_{x \sim P_{\text{real}},\, \delta \sim N_d(0, cI)} \left[ \|\nabla_x D_\theta(x + \delta)\| - k \right]^2 \quad (1)$$
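A minimal PyTorch-style sketch of the penalty in equation 1 is given below, using the default hyperparameters listed in the additional details below (λ ≈ 10, k = 1, c ≈ 10). Practical implementations often scale the noise relative to the data statistics, so treat this as an illustration of the scheme rather than a faithful reproduction of our code:

```python
import torch

def dragan_penalty(D, x_real, lam=10.0, k=1.0, c=10.0):
    """DRAGAN local gradient penalty (equation 1):
    lam * E_{x ~ P_real, delta ~ N(0, c I)} [ ||grad_x D(x + delta)|| - k ]^2
    """
    delta = torch.randn_like(x_real) * (c ** 0.5)        # N(0, c I) noise
    x_hat = (x_real + delta).requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(x_hat).sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)          # per-sample gradient norm
    return lam * ((grad_norm - k) ** 2).mean()

# The penalty is simply added to the discriminator loss before its update:
#   loss_d = vanilla_discriminator_loss + dragan_penalty(D, x_real)
```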
² At times, stochasticity seems to help in getting out of the basin of attraction of a bad equilibrium.
This still works as long as small perturbations of real data, x+ δ are likely to lie off the data-manifold, which is true in the case of image domain and some other settings. Because, in these cases, we do want our discriminator to assign different probabilities of being real to training data and noisy samples. We caution the practitioners to keep this important point in mind while making their choice of penalty. All of the above schemes have the same effect of constraining the norm of discriminator’s gradients around real points to be small and can therefore, mitigate the mode collapse situation. We refer to GAN training using these penalty schemes or heuristics as the DRAGAN algorithm.
Additional details:
• We use the vanilla GAN objective in our experiments, but our penalty improves stability using other objective functions as well. This is demonstrated in section 3.3.
• The penalty scheme used in our experiments is the one shown in equation 1.
• We use small pixel-level noise but it is possible to find better ways of imposing this penalty. However, this exploration is beyond the scope of our paper.
• The optimal configuration of the hyperparameters for DRAGAN depends on the architecture, dataset and data domain. We set them to be λ ∼ 10, k = 1 and c ∼ 10 in most of our experiments.
2.5 COUPLED VS LOCAL PENALTIES
Several recent works have also proposed regularization schemes which constrain the discriminator’s gradients in the ambient data space, so as to improve the stability of GAN training. Despite being from different motivations, WGAN-GP and LS-GAN are closely related approaches to ours. First, we show that these two approaches are very similar, which is not widely known in the literature. Qi (2017) introduced LS-GAN with the idea of maintaining a margin between losses assigned to real and
fake samples. Further, they also impose a Lipschitz constraint on D and the two conditions together result in a situation where the following holds for any real and fake sample pair (roughly):
$$D_\theta(x) - D_\theta(G_\phi(z)) \approx \|x - G_\phi(z)\| \quad (2)$$
The authors argue that the resulting discriminator function would have non-vanishing gradients almost everywhere between real and fake samples (section 6 of Qi (2017)). Next, Gulrajani et al. (2017) proposed an extension to address various shortcomings of the original WGAN and they impose the following condition on D:
$$\|\nabla_x D_\theta(\hat{x})\| \approx 1 \quad (3)$$
where $\hat{x} = \epsilon x + (1 - \epsilon) G_\phi(z)$ is some point on the line between a real and a fake sample, both chosen independently at random. This leads to D having norm-1 gradients almost everywhere between real and fake samples. Notice that this behavior is very similar to that of LS-GAN's discriminator function. Thus, WGAN-GP is a slight variation of the original LS-GAN algorithm and we refer to these methods as "coupled penalties".
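For contrast with the local penalty of section 2.4, a sketch of the coupled penalty built around equation 3 (the squared form popularized by Gulrajani et al. (2017)), written in the same style as the DRAGAN sketch above:

```python
import torch

def coupled_penalty(D, x_real, x_fake, lam=10.0):
    """WGAN-GP style penalty: push ||grad_x D|| towards 1 at random points on
    the lines between independently paired real and generated samples."""
    x_fake = x_fake.detach()                             # penalty only regularizes D
    mix_shape = (x_real.size(0),) + (1,) * (x_real.dim() - 1)
    eps = torch.rand(mix_shape)                          # per-sample mixing weight
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(x_hat).sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```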
On a side note, we also want to point out that WGAN-GP’s penalty doesn’t actually follow from KR-duality as claimed in their paper. By Lemma 1 of Gulrajani et al. (2017), the optimal discriminator D∗ will have norm-1 gradients (almost everywhere) only between those x and Gφ(z) pairs which are sampled from the optimal coupling or joint distribution π∗. Therefore, there is no basis for WGAN-GP’s penalty (equation 3) where arbitrary pairs of real and fake samples are used. This fact adds more credence to our theory regarding why gradient penalties might be mitigating mode collapse.
The most important distinction between coupled penalties and our methods is that we only impose gradient constraints in local regions around real samples. We refer to these penalty schemes as “local penalties”. Coupled penalties impose gradient constraints between real and generated samples and we point out some potential issues that arise from this:
• With adversarial training finding applications beyond fitting implicit generative models, penalties which depend on generated samples can be prohibitive.
• The resulting class of functions when coupled penalties are used will be highly restricted compared to our method and this affects modeling performance. We refer the reader to Figure 4 and appendix section 5.2.2 to see this effect.
• Our algorithm works with AGD, while WGAN-GP needs multiple inner iterations to optimize D. This is because the generated samples can be anywhere in the data space and they change from one iteration to the next. In contrast, we consistently regularize Dθ(x) only along the real data manifold.
To conclude, appropriate constraining of the discriminator’s gradients can mitigate mode collapse but we should be careful so that it doesn’t have any negative effects. We pointed out some issues with coupled penalties and how local penalties can help. We refer the reader to section 3 for further experimental results.
3 EXPERIMENTAL RESULTS
In section 3.1, we compare the modeling performance of our algorithm against vanilla GAN and WGAN variants in the standard DCGAN/CIFAR-10 setup. Section 3.2 demonstrates DRAGAN’s improved stability across a variety of architectures. In section 3.3, we show that our method also works with other objective functions. Appendix contains samples for inspection, some of the missing plots and additional results. Throughout, we use inception score (Salimans et al., 2016) which is a well-studied and reliable metric in the literature, and sample quality to measure the performance.
3.1 INCEPTION SCORES FOR CIFAR-10 USING DCGAN ARCHITECTURE
DCGAN is a family of architectures designed to perform well with the vanilla training procedure. They are ubiquitous in the GAN literature owing to the instability of vanilla GAN in general settings. We use this architecture to model CIFAR-10 and compare against vanilla GAN, WGAN and WGAN-GP. As WGANs need 5 discriminator iterations for every generator iteration, comparing the modeling performance can be tricky. To address this, we report two scores for vanilla GAN and DRAGAN - one using the same number of generator iterations as WGANs and one using the same number of discriminator iterations. The results are shown in Figure 5 and samples are included in the appendix (Figure 8). Notice that DRAGAN beats WGAN variants in both configurations, while vanilla GAN is only slightly better. A key point to note here is that our algorithm is fast compared to WGANs, so in practice, the performance will be closer to the DRAGAN_d case (the configuration with matched discriminator iterations). In the next section, we will show that if we move away from this specific architecture family, vanilla GAN training can become highly unstable and that the DRAGAN penalty mitigates this issue.
3.2 MEASURING STABILITY AND PERFORMANCE ACROSS ARCHITECTURES
Ideally, we would want our training procedure to perform well in a stable fashion across a variety of architectures (other than DCGANs). Similar to Arjovsky et al. (2017) and Gulrajani et al. (2017), we remove the stabilizing components of DCGAN architecture and demonstrate improved stability & modeling performance compared to vanilla GAN training (see appendix section 5.2.3). However, this is a small set of architectures and it is not clear if there is an improvement in general.
To address this, we introduce a metric termed the BogoNet score to compare the stability & performance of different GAN training procedures. The basic idea is to choose random architectures for players G and D independently, and evaluate the performance of different algorithms in the resulting games. A good algorithm should achieve stable performance without failing to learn or resulting in mode collapse, despite the potentially imbalanced architectures. In our experiment, each player is assigned a network from a diverse pool of architectures belonging to three different families (MLP, ResNet, DCGAN).
To demonstrate that our algorithm performs better compared to vanilla GAN training and WGAN-GP, we created 100 such instances of hard games. Each instance is trained using these algorithms on CIFAR-10 (under similar conditions for a fixed number of generator iterations, which gives a slight advantage to WGAN-GP) and we plot how inception score changes over time. For each algorithm, we calculated the average of final inception scores and area under the curve (AUC) over all 100 instances. The results are shown in Table 1. Notice that we beat the other algorithms in both metrics, which indicates some improvement in stability and modeling performance.
Further, we perform some qualitative analysis to verify that BogoNet score indeed captures the improvements in stability. We create another set of 50 hard architectures and compare DRAGAN against vanilla GAN training. Each instance is allotted 5 points and we split this bounty between the two algorithms depending on their performance. If both perform well or perform poorly, they get 2.5 points each, so that we nullify the effect of such non-differentiating architectures. However, if one algorithm achieves stable performance compared to the other (in terms of failure to learn or mode collapses), we assign it higher portions of the bounty. Results were judged by two of the authors in a blind manner: The curves were shown side-by-side with the choice of algorithm for each side being randomized and unlabeled. The vanilla GAN received an average score of 92.5 while our algorithm achieved an average score of 157.5 and this correlates with BogoNet score from earlier. See appendix section 5.3 for some additional details regarding this experiment.
3.3 STABILITY USING DIFFERENT OBJECTIVE FUNCTIONS
Our algorithm improves stability across a variety of objective functions and we demonstrate this using the following experiment. Nowozin et al. (2016) show that we can interpret GAN training as minimizing various f -divergences when an appropriate game objective function is used. We show experiments using the objective functions developed for Forward KL, Reverse KL, Pearson χ2, Squared Hellinger, and Total Variation divergence minimization. We use a hard architecture from the previous subsection to demonstrate the improvements in stability. Our algorithm is stable in all cases except for the total variation case, while the vanilla algorithm failed in all the cases (see Figure 6 for two examples and Figure 15 in appendix for all five). Thus, practitioners can now choose their game objective from a larger set of functions and use DRAGAN (unlike WGANs which requires a specific objective function).
4 CONCLUSIONS
In this paper, we propose to study GAN training process as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view and hypothesize that mode collapse occurs due to the existence of undesirable local equilibria. A simple observation is made about how the mode collapse situation often exhibits sharp gradients of the discriminator function around some real data points. This characterization partly explains the workings of previously proposed WGAN and gradient penalties, and motivates our novel penalty scheme. We show evidence of improved stability using DRAGAN and the resulting improvements in modeling performance across a variety of settings. We leave it to future works to explore our ideas in more depth and come up with improved training algorithms.
5 APPENDIX
5.1 SAMPLES AND LATENT SPACE WALKS
In this section, we provide samples from an additional experiment run on CelebA dataset (Figure 7). The samples from the experiment in section 3.1 are shown in Figure 8. Further, Radford et al. (2015) suggest that walking on the manifold learned by the generator can expose signs of memorization. We use DCGAN architecture to model MNIST and CelebA datasets using DRAGAN penalty, and the latent space walks of the learned models are shown in Figure 9 and Figure 10. The results demonstrate that the generator is indeed learning smooth transitions between different images, when our algorithm is used.
5.2 ADDITIONAL EXPERIMENTS
5.2.1 ONE HIDDEN LAYER NETWORK TO MODEL MNIST
We design a simple experiment where G and D are both fully connected networks with just one hidden layer. Vanilla GAN performs poorly even in this simple case and we observe severe mode collapses. In contrast, our algorithm is stable throughout and obtains decent quality samples despite the constrained setup.
5.2.2 8-GAUSSIANS EXPERIMENT
We analyze the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. As it can be seen in Figure 13, both of them approximately converge to the real distribution but notice that in the case of WGAN-GP, Dθ(x) seems overly constrained in the data space. In contrast, DRAGAN’s discriminator is more flexible.
Figure 13 (panels: Improved WGAN, DRAGAN): Comparing the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. Orange is real samples, green is generated samples. The level sets of Dθ(x) are shown in the background, with yellow as high and purple as low.
5.2.3 STABILITY ACROSS DCGAN ARCHITECTURE VARIATIONS
DCGAN architecture has been designed following specific guidelines to make it stable (Radford et al., 2015). We restate the suggested rules here.
1. Use all-convolutional networks which learn their own spatial downsampling (discriminator) or upsampling (generator)
2. Remove fully connected hidden layers for deeper architectures
3. Use batch normalization in both the generator and the discriminator
4. Use ReLU activation in the generator for all layers except the output layer, which uses tanh
5. Use LeakyReLU activation in the discriminator for all layers
We show below that such constraints can be relaxed when using our algorithm and still maintain training stability. Below, we present a series of experiments in which we remove different stabilizing components from the DCGAN architecture and analyze the performance of our algorithm. Specifically, we choose the following four architectures which are difficult to train (in each case, we start with base DCGAN architecture and apply the changes) -
• No BN and a constant number of filters in the generator
• 4-layer 512-dim ReLU MLP generator
• tanh nonlinearities everywhere
• tanh nonlinearity in the generator and 4-layer 512-dim LeakyReLU MLP discriminator
Notice that, in each case, our algorithm is stable while the vanilla GAN training fails. A similar approach is used to demonstrate the stability of training procedures in Arjovsky et al. (2017) and Gulrajani et al. (2017).
[Figure panels: (a) tanh activation, (b) FC generator]
5.2.4 STABILITY ACROSS OBJECTIVE FUNCTIONS
Due to space limitations, we only showed plots for two cases in section 3.3. Below we show the results for all five cases.
[Figure panels: (a) Reverse KL, (b) Pearson χ², (c) Forward KL, (d) Total Variation]
5.3 BOGONET DETAILS
We used three families of architectures with probabilities - DCGAN (0.6), ResNet (0.2), MLP (0.2). Next, we further parameterized each family to create additional variation. For instance, the DCGAN family can result in networks with or without batch normalization, have LeakyReLU or Tanh nonlinearities. The number and width of filters, latent space dimensionality are some other possible variations in our experiment. Similarly, the number of layers and hidden units in each layer for MLP are chosen randomly. For ResNets, we chose their depth randomly. This creates a set of hard games which test the stability of a given training algorithm.
We showed qualitative analysis of the inception score plots in section 3.2 to verify that BogoNet score indeed captures the improvements in stability. Below, we show some examples of how the bounty splits were done. The plots in Figure 14 were scored as (averages are shown in DRAGAN, Vanilla GAN order):
A - (5, 0), B - (3.5, 1.5), C – (2.25, 2.75), D – (2, 3) | 1. What is the focus of the review, and what are the reviewer's main concerns regarding the paper's contribution and scientific significance?
2. How does the reviewer question the author's justification for choosing their regularization method, and what does the reviewer suggest as an alternative?
3. Why does the reviewer ask about the choice of hyperparameter lambda and its effect on the results, and what does the reviewer propose as an additional experiment?
4. What is the bogonet score, and what two questions does the reviewer have regarding this experiment? | Review | Review
This paper addresses the well-known stability problem encountered when training GANs. Like many other papers, they suggest adding a regularization penalty on the discriminator which penalizes the gradient with respect to the data, effectively linearizing the data manifold.
Relevance: Although I think some of the empirical results provided in the paper are interesting, I doubt the scientific contribution of this paper is significant. First of all, the penalty the authors suggest is the same as the one suggested by Gulrajani for Wasserstein GAN (there the motivation behind this penalty comes from the optimal transport plan). In this paper, the authors apply the same penalty to the GAN objective with the alternative update rule, which is also a lower bound for the Wasserstein distance.
Justification: The authors justify the choice of their regularization saying it linearizes the objective along the data manifold and claim it reduces the number of non-optimal fixed points. This might be true in the data space, but the GAN objective is optimized over the parameter space and it is therefore not clear to me that their argument holds w.r.t. the network parameters. Can you please comment on this?
Regularizing the generator: Can the authors motivate their choice for regularizing the discriminator only, and not the generator? Following their reasoning of linearizing the objective, the same argument should apply to the generator.
Comparison to existing work: This is not the first paper that suggests adding a regularization. Given that the theoretical aspects of the paper are rather weak, I would at least expect a comparison to existing regularization methods, e.g.
Stabilizing training of generative adversarial networks through regularization. NIPS, 2017
Choice of hyper-parameters: The authors say that the suggested value for lambda is 10. Can you comment on the choice of this parameter and how it affects the results? Have you tried annealing lambda? This is a common procedure in optimization (see e.g. homotopy or continuation methods).
Bogonet score: I very much like the experiment where the authors select 100 different architectures to compare their method against the vanilla GAN approach. I have two questions here:
- Did you do a deeper examination of your results, e.g. were there some architectures for which none of the methods performed well?
- Did you try to run this experiment on other datasets? |
ICLR | Title
On Convergence and Stability of GANs
Abstract
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
1 INTRODUCTION
Generative modeling involves taking a set of samples drawn from an unknown data generating distribution Preal and finding an estimate Pmodel that closely resembles it. Generative adversarial networks (GAN) (Goodfellow et al., 2014) is a powerful framework used for fitting implicit generative models. The basic setup consists of two networks, the generator and the discriminator, playing against each other in a repeated zero-sum game setting. The goal here is to reach an equilibrium where Preal, Pmodel are close, and the alternating gradient updates procedure (AGD) is used to achieve this. However, this process is highly unstable and often results in mode collapse (Goodfellow, 2017). This calls for a deeper investigation into the training dynamics of GANs.
In this paper, we propose studying GAN training dynamics as a repeated game in which both the players are using no-regret algorithms (Cesa-Bianchi & Lugosi, 2006) and discuss how AGD¹ falls under this paradigm. In contrast, much of the theory (Goodfellow et al., 2014; Arjovsky & Bottou, 2017) and recent developments (Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017) are based on the unrealistic assumption that the discriminator is playing optimally (in the function space) at each step and as a result, there is consistent minimization of a divergence between real and generated distributions. This corresponds to at least one player using the best-response algorithm (in the function space), and the resulting game dynamics can be completely different in both these cases (Nisan et al., 2007). Thus, there is a clear disconnect between theoretical arguments used as motivation in recent literature and what actually happens in practice.
We would like to point out that the latter view can still be useful for reasoning about the asymptotic equilibrium situation but we argue that regret minimization is the more appropriate way to think about GAN training dynamics. So, we analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We start with a short analysis of the artificial convex-concave case of the GAN game in section 2.2. This setting has a unique solution and guaranteed convergence (of averaged iterates) using no-regret algorithms can be shown with standard arguments from game theory literature. Here, we make explicit, the critical (previously not widely known) connection between AGD used in GAN training and regret minimization. This immediately yields a novel proof for the asymptotic convergence of GAN training, in the non-parametric limit. Prior to our work, such a result (Goodfellow et al., 2014) required a strong assumption that the discriminator is optimal at each step.
¹ Most of our analysis applies to the simultaneous gradient updates procedure as well.
However, these convergence results do not hold when the game objective function is non-convex, which is the practical case when deep neural networks are used. In non-convex games, global regret minimization and equilibrium computation are computationally hard in general. Recent gametheoretic literature indicates that AGD can end up cycling (Mertikopoulos et al., 2017) or converging to a (potentially bad) local equilibrium, under some conditions (Hazan et al., 2017). We hypothesize these to be the reasons for cycling and mode collapse observed during GAN training, respectively (section 2.3). In this work, we do not explore the cycling issue but focus our attention on the mode collapse problem. In contrast to our hypothesis, the prevalent view of mode collapse and instability (Arjovsky & Bottou, 2017) is that it results from attempting to minimize a strong divergence during training. However, as we argued earlier, GAN training with AGD does not consistently minimize a divergence and therefore, such a theory is not suitable to discuss convergence or to address the stability issue.
Next, if mode collapse is indeed the result of an undesirable local equilibrium, a natural question is how we can avoid it. We make the simple observation that, in the GAN game, mode collapse situations are often accompanied by sharp gradients of the discriminator function around some real data points (section 2.4). Therefore, a simple strategy to mitigate mode collapse is to regularize the discriminator so as to constrain its gradients in the ambient data space. We demonstrate that this improves stability using a toy experiment with one-hidden-layer neural networks. This gives rise to a new explanation for why WGAN and gradient penalties might be improving the stability of GAN training: they mitigate the mode collapse problem by keeping the gradients of the discriminator function small in data space. From this motivation, we propose a training algorithm involving a novel gradient penalty scheme called DRAGAN (Deep Regret Analytic Generative Adversarial Networks), which enables faster training and achieves improved stability and modeling performance (over WGAN-GP (Gulrajani et al., 2017), the state-of-the-art stable training procedure) across a variety of architectures and objective functions.
Below, we provide a short literature review. Several recent works focus on stabilizing the training of GANs. While some solutions (Radford et al., 2015; Salimans et al., 2016) require the use of specific architectures or modeling objectives, others (Che et al., 2016; Zhao et al., 2016) significantly deviate from the original GAN framework. Other promising works in this direction (Metz et al., 2016; Arjovsky et al., 2017; Qi, 2017; Gulrajani et al., 2017) impose a significant computational overhead. Thus, a fast and versatile method for consistently stable training of GANs is still missing in the literature. Our work is aimed at addressing this.
To summarize, our contributions are as follows:
• We propose a new way of reasoning about the GAN training dynamics - by viewing AGD as regret minimization.
• We provide a novel proof for the asymptotic convergence of GAN training in the nonparametric limit and it does not require the discriminator to be optimal at each step.
• We discuss how AGD can converge to a potentially bad local equilibrium in non-convex games and hypothesize this to be responsible for mode collapse during GAN training.
• We characterize mode collapse situations with sharp gradients of the discriminator function around some real data points.
• A novel gradient penalty scheme called DRAGAN is introduced based on this observation and we demonstrate that it mitigates the mode collapse issue.
2 THEORETICAL ANALYSIS OF GAN TRAINING DYNAMICS
We start with a brief description of the GAN framework (section 2.1). We discuss guaranteed convergence in the artificial convex-concave case using no-regret algorithms, and make a critical connection between GAN training process (AGD) and regret minimization (section 2.2). This immediately yields a novel proof for the asymptotic convergence of GAN training in the nonparametric limit. Then, we consider the practical non-convex case and discuss how AGD can converge to a potentially bad local equilibrium here (section 2.3). We characterize mode collapse situations with sharp gradients of the discriminator function around real samples and this provides an effective strategy to avoid them. This naturally leads to the introduction of our gradient penalty
scheme DRAGAN (section 2.4). We end with a discussion and comparison with other gradient penalties in the literature (section 2.5).
2.1 BACKGROUND
The GAN framework can be viewed as a repeated zero-sum game consisting of two players: the generator, which produces synthetic data given some noise source, and the discriminator, which is trained to distinguish the generator’s samples from the real data. The generator model G is parameterized by φ, takes a noise vector z as input, and produces a synthetic sample Gφ(z). The discriminator model D is parameterized by θ, takes a sample x as input, and computes Dθ(x), which can be interpreted as the probability that x is real.
The models G and D can be selected from any arbitrary class of functions; in practice, GANs typically rely on deep networks for both. Their cost functions are defined as
J^{(D)}(\phi, \theta) := -\mathbb{E}_{x \sim P_{real}} \log D_\theta(x) - \mathbb{E}_{z} \log(1 - D_\theta(G_\phi(z))), \quad \text{and} \quad J^{(G)}(\phi, \theta) := -J^{(D)}(\phi, \theta)
The complete game can be specified as

\min_{\phi} \max_{\theta} \; \Big\{ J(\phi, \theta) = \mathbb{E}_{x \sim P_{real}} \log D_\theta(x) + \mathbb{E}_{z} \log(1 - D_\theta(G_\phi(z))) \Big\}

The generator distribution Pmodel asymptotically converges to the real distribution Preal if updates are made in the function space and the discriminator is optimal at each step (Goodfellow et al., 2014).
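To make this setup concrete, the following is a minimal PyTorch-style sketch of these cost functions for one mini-batch; the networks D and G, the data batch, and the noise tensor are illustrative placeholders rather than part of the original formulation.

```python
import torch

def gan_costs(D, G, x_real, z):
    """Vanilla GAN costs J^(D) and J^(G) = -J^(D) for one mini-batch.
    D(x) is assumed to output the probability that x is real."""
    d_real = D(x_real)                 # D_theta(x), x ~ P_real
    d_fake = D(G(z))                   # D_theta(G_phi(z)), z from the noise source
    # J^(D) = -E[log D(x)] - E[log(1 - D(G(z)))]
    j_d = -(torch.log(d_real + 1e-8).mean() + torch.log(1.0 - d_fake + 1e-8).mean())
    j_g = -j_d                         # zero-sum game: J^(G) = -J^(D)
    return j_d, j_g
```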
2.2 CONVEX-CONCAVE CASE AND NO-REGRET ALGORITHMS
According to Sion’s theorem (Sion, 1958), if Φ ⊂ R^m and Θ ⊂ R^n are compact and convex sets, and the function J : Φ × Θ → R is convex in its first argument and concave in its second, then we have

\min_{\phi \in \Phi} \max_{\theta \in \Theta} J(\phi, \theta) = \max_{\theta \in \Theta} \min_{\phi \in \Phi} J(\phi, \theta)
That is, an equilibrium is guaranteed to exist in this setting where players’ payoffs correspond to the unique value of the game (Neumann, 1928).
A natural question then is how we can find such an equilibrium. A simple procedure that players can use is the best-response algorithm (BRD): in each round, a best-responding player plays its optimal strategy given its opponent’s current strategy. Despite its simplicity, BRD is often computationally intractable and does not lead to convergence even in simple games. In contrast, a technique that is both efficient and provably works is regret minimization. If both players update their parameters using no-regret algorithms, then it is easy to show that their averaged iterates will converge to an equilibrium pair (Nisan et al., 2007). Let us first define no-regret algorithms.
Definition 2.1 (No-regret algorithm). Given a sequence of convex loss functions L_1, L_2, \ldots : K \to \mathbb{R}, an algorithm that selects a sequence of k_t’s, each of which may depend only on the previously observed L_1, \ldots, L_{t-1}, is said to have no regret if \frac{R(T)}{T} = o(1), where we define

R(T) := \sum_{t=1}^{T} L_t(k_t) - \min_{k \in K} \sum_{t=1}^{T} L_t(k)
We can apply no-regret learning to our problem of equilibrium finding in the GAN game J(·, ·) as follows. The generator imagines the function J(·, θt) as its loss function on round t, and similarly the discriminator imagines −J(φt, ·) as its loss function at t. After T rounds of play, each player computes the average iterates \bar{\phi}_T := \frac{1}{T}\sum_{t=1}^{T}\phi_t and \bar{\theta}_T := \frac{1}{T}\sum_{t=1}^{T}\theta_t. If V^* is the equilibrium value of the game, and the players suffer regret R_1(T) and R_2(T) respectively, then one can show using standard arguments (Freund & Schapire, 1999) that

V^* - \frac{R_2(T)}{T} \;\le\; \max_{\theta \in \Theta} J(\bar{\phi}_T, \theta) - \frac{R_2(T)}{T} \;\le\; \min_{\phi \in \Phi} J(\phi, \bar{\theta}_T) + \frac{R_1(T)}{T} \;\le\; V^* + \frac{R_1(T)}{T}.

In other words, \bar{\theta}_T and \bar{\phi}_T are "almost optimal" solutions to the game, where the "almost" approximation factor is given by the average regret terms \frac{R_1(T)+R_2(T)}{T}. Under the no-regret condition, the former will vanish, and hence we can guarantee convergence in the limit. Next, we define a popular family of no-regret algorithms.
Definition 2.2 (Follow The Regularized Leader). FTRL (Hazan et al., 2016) selects k_t on round t by solving for \arg\min_{k \in K}\left\{\sum_{s=1}^{t-1} L_s(k) + \frac{1}{\eta}\Omega(k)\right\}, where Ω(·) is some convex regularization function and η is a learning rate.
Remark: Roughly speaking, if you select the regularization as \Omega(\cdot) = \frac{1}{2}\|\cdot\|^2, then FTRL becomes the well-known online gradient descent or OGD (Zinkevich, 2003). Ignoring the case of constraint violations, OGD can be written in a simple iterative form: k_t = k_{t-1} - \eta \nabla L_{t-1}(k_{t-1}). The typical GAN training procedure using alternating gradient updates (or simultaneous gradient updates) is almost exactly this: both players apply online gradient descent. Notice that the min/max objective function in GANs involves a stochastic component, with two randomized inputs given on each round, x and z, which are sampled from the data distribution and a standard multivariate normal, respectively. Let us write J_{x,z}(\phi, \theta) := \log D_\theta(x) + \log(1 - D_\theta(G_\phi(z))). Taking expectations with respect to x and z, we define the full (non-stochastic) game as J(\phi, \theta) = \mathbb{E}_{x,z}[J_{x,z}(\phi, \theta)]. The above online training procedure remains valid with stochastic inputs. That is, the equilibrium computation proceeds similarly, where on each round we sample x_t and z_t, and follow the updates
\phi_{t+1} \leftarrow \phi_t - \eta \nabla_\phi J_{x_t,z_t}(\phi_t, \theta_t) \quad \text{and} \quad \theta_{t+1} \leftarrow \theta_t + \eta' \nabla_\theta J_{x_t,z_t}(\phi_t, \theta_t)
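A minimal sketch of this stochastic alternating-update loop is shown below (one discriminator ascent step and one generator descent step per round); the use of plain SGD, the learning rates, and the data loader are illustrative assumptions.

```python
import torch

def alternating_gradient_updates(D, G, data_loader, noise_dim, eta=1e-4, eta_prime=1e-4):
    """One epoch of alternating stochastic gradient updates on J_{x,z}(phi, theta)."""
    opt_g = torch.optim.SGD(G.parameters(), lr=eta)         # generator: gradient descent on J
    opt_d = torch.optim.SGD(D.parameters(), lr=eta_prime)   # discriminator: gradient ascent on J

    def game_value(x, z):
        # J_{x,z}(phi, theta) = log D(x) + log(1 - D(G(z)))
        return (torch.log(D(x) + 1e-8).mean()
                + torch.log(1.0 - D(G(z)) + 1e-8).mean())

    for x in data_loader:                         # x_t ~ P_real, one batch per round
        z = torch.randn(x.size(0), noise_dim)     # z_t ~ N(0, I)
        opt_d.zero_grad()
        (-game_value(x, z)).backward()            # theta <- theta + eta' * grad_theta J
        opt_d.step()
        opt_g.zero_grad()
        game_value(x, z).backward()               # phi <- phi - eta * grad_phi J
        opt_g.step()
```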
On a side note, a benefit of this stochastic perspective is that we can get a generalization bound on the mean parameters \bar{\phi}_T after T rounds of optimization. The celebrated "online-to-batch conversion" (Cesa-Bianchi et al., 2004) implies that \mathbb{E}_{x,z}[J_{x,z}(\bar{\phi}_T, \theta)], for any θ, is no more than the optimal value \mathbb{E}_{x,z}[J_{x,z}(\phi^*, \theta)] plus an "estimation error" bounded by \mathbb{E}\left[\frac{R_1(T)+R_2(T)}{T}\right], where the expectation is taken with respect to the sequence of samples observed along the way, and any randomness in the algorithm. Analogously, this applies to \bar{\theta}_T as well. A limitation of this result, however, is that it requires a fresh sample x_t to be used on every round.
To summarize, in this subsection we discussed how the artificial convex-concave case is easy to solve through regret minimization. While this is a standard result in the game theory and online learning literature, it is not widely known in the GAN literature. For instance, Salimans et al. (2016) and Goodfellow (2017) discuss a toy game which is convex-concave and show cycling behavior; the simple solution in that case is to just average the iterates. Further, we made explicit the critical connection between regret minimization and the alternating gradient updates procedure used for GAN training. Now, Goodfellow et al. (2014) argue that, if G and D have enough capacity (in the non-parametric limit) and updates are made in the function space, then the GAN game can be considered convex-concave. Thus, our analysis based on regret minimization immediately yields a novel proof for the asymptotic convergence of GANs, without requiring that the discriminator be optimal at each step.
Moreover, the connection between regret minimization and GAN training process gives a novel way to reason about its dynamics. In contrast, the popular view of GAN training as consistently minimizing a divergence arises if the discriminator uses BRD (in the function space) and thus, it has little to do with the actual training process of GANs. As a result, this calls into question the motivation behind many recent developments like WGAN and gradient penalties among others, which improve the training stability of GANs. In the next subsection, we discuss the practical non-convex case and why training instability arises. This provides the necessary ideas to investigate mode collapse from our new perspective.
2.3 NON-CONVEX CASE AND LOCAL EQUILIBRIA
In practice, we choose G and D to be deep neural networks and the function J(φ, θ) need not be convex-concave anymore. The nice properties we had in the convex-concave case, like the existence of a unique solution and guaranteed convergence through regret minimization, no longer hold. In fact, regret minimization and equilibrium computation are computationally hard in general non-convex settings. However, analogous to the case of non-convex optimization (also intractable), where we focus on finding local minima, we can look for tractable solution concepts in non-convex games.
Recent work by Hazan et al. (2017) introduces the notion of local regret and shows that if both the players use a smoothed variant of OGD to minimize this quantity, then the non-convex game converges to some form of local equilibrium, under mild assumptions. The usual training procedure of GANs (AGD) corresponds to using a window size of 1 in their formulation. Thus, GAN training will eventually converge (approximately) to a local equilibrium which is described below or the updates will cycle. We leave it to future works to explore the equally important cycling issue and focus here on the former case.
Definition 2.3 (Local Equilibrium). A pair (\phi^*, \theta^*) is called an \epsilon-approximate local equilibrium if it holds that

\forall \phi', \|\phi' - \phi^*\| \le \eta : \; J(\phi^*, \theta^*) \le J(\phi', \theta^*) + \epsilon
\forall \theta', \|\theta' - \theta^*\| \le \eta : \; J(\phi^*, \theta^*) \ge J(\phi^*, \theta') - \epsilon
That is, in a local equilibrium, both the players do not have much of an incentive to switch to any other strategy within a small neighborhood of their current strategies. Now, we turn our attention to the mode collapse issue which poses a significant challenge to the GAN training process. The training is said to have resulted in mode collapse if the generator ends up mapping multiple z vectors to the same output x, which is assigned a high probability of being real by the discriminator (Goodfellow, 2017). We hypothesize this to be the result of the game converging to bad local equilibria.
The prevalent view of mode collapse and instability in GAN training (Arjovsky & Bottou, 2017) is that they are caused by the supports of the real and model distributions being disjoint or lying on low-dimensional manifolds. The argument is that this results in strong distance measures like the KL-divergence or JS-divergence getting maxed out, so the generator cannot get useful gradients to learn. In fact, this is the motivation for the introduction of WGAN (Arjovsky et al., 2017). But, as we argued earlier, GAN training does not consistently minimize a divergence, as that would require using intractable best-response algorithms. Hence, such a theory is not suitable for discussing convergence or addressing the instability of GAN training. Our new view of the GAN training process as regret minimization is closer to what is used in practice and provides an alternate explanation for mode collapse: the existence of undesirable local equilibria. The natural question now is how we can avoid them.
2.4 MODE COLLAPSE AND GRADIENT PENALTIES
The problem of dealing with multiple equilibria in games and how to avoid undesirable ones is an important question in algorithmic game theory (Nisan et al., 2007). In this work, we constrain ourselves to the GAN game and aim to characterize the undesirable local equilibria (mode collapse) in an effort to avoid them. In this direction, after empirically studying multiple mode collapse cases, we found that mode collapse is often accompanied by the discriminator function having sharp gradients around some real data points (see Figure 1)2. This intuitively makes sense from the definition of mode collapse discussed earlier. Such sharp gradients encourage the generator to map multiple z vectors to a single output x and lead the game towards a degenerate equilibrium. A simple strategy to mitigate this failure case is therefore to regularize the discriminator using the following penalty:
\lambda \cdot \mathbb{E}_{x \sim P_{real},\, \delta \sim N_d(0, cI)}\left[\|\nabla_x D_\theta(x + \delta)\|^2\right]

This strategy indeed improves the stability of GAN training. We show the results of a toy experiment with one-hidden-layer neural networks in Figure 2 and Figure 3 to demonstrate this. This partly explains the success of WGAN and gradient penalties in the recent literature (Gulrajani et al., 2017; Qi, 2017), and why they improve the training stability of GANs, despite being motivated by reasoning based on unrealistic assumptions. However, we noticed that this scheme in its current form can be brittle: if over-penalized, the discriminator can end up assigning both a real point x and noise x + δ the same probability of being real. Thus, a better choice of penalty is

\lambda \cdot \mathbb{E}_{x \sim P_{real},\, \delta \sim N_d(0, cI)}\left[\max\left(0, \|\nabla_x D_\theta(x + \delta)\|^2 - k\right)\right]

Finally, due to practical optimization considerations (this has also been observed in Gulrajani et al. (2017)), we instead use the penalty shown below in all our experiments.

\lambda \cdot \mathbb{E}_{x \sim P_{real},\, \delta \sim N_d(0, cI)}\left[\left(\|\nabla_x D_\theta(x + \delta)\| - k\right)^2\right] \quad (1)
2At times, stochasticity seems to help in getting out of the basin of attraction of a bad equilibrium
This still works as long as small perturbations of real data, x + δ, are likely to lie off the data manifold, which is true in the image domain and some other settings, because in these cases we do want our discriminator to assign different probabilities of being real to training data and noisy samples. We caution practitioners to keep this important point in mind while making their choice of penalty. All of the above schemes have the same effect of constraining the norm of the discriminator’s gradients around real points to be small and can therefore mitigate the mode collapse situation. We refer to GAN training using these penalty schemes or heuristics as the DRAGAN algorithm; a short sketch of this penalty is given after the details below.
Additional details:
• We use the vanilla GAN objective in our experiments, but our penalty improves stability using other objective functions as well. This is demonstrated in section 3.3.
• The penalty scheme used in our experiments is the one shown in equation 1.
• We use small pixel-level noise but it is possible to find better ways of imposing this penalty. However, this exploration is beyond the scope of our paper.
• The optimal configuration of the hyperparameters for DRAGAN depends on the architecture, dataset and data domain. We set them to be λ ∼ 10, k = 1 and c ∼ 10 in most of our experiments.
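A minimal PyTorch-style sketch of the local penalty in equation 1 is given below; the default values λ = 10 and k = 1 follow the hyperparameters listed above, while the exact perturbation scale is an illustrative assumption.

```python
import torch

def dragan_penalty(D, x_real, lambda_=10.0, k=1.0, noise_std=0.5):
    """Local gradient penalty of equation (1):
    lambda * E_{x ~ P_real, delta}[ (||grad_x D(x + delta)|| - k)^2 ].
    Only real samples are perturbed; no generated samples are involved."""
    delta = noise_std * torch.randn_like(x_real)              # small noise around real points
    x_pert = (x_real + delta).detach().requires_grad_(True)
    d_out = D(x_pert)
    grad = torch.autograd.grad(outputs=d_out.sum(), inputs=x_pert,
                               create_graph=True)[0]          # grad_x D_theta(x + delta)
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return lambda_ * ((grad_norm - k) ** 2).mean()
```

In use, the penalty is simply added to the discriminator's loss at each update, e.g. loss_d = j_d + dragan_penalty(D, x_real).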
2.5 COUPLED VS LOCAL PENALTIES
Several recent works have also proposed regularization schemes that constrain the discriminator’s gradients in the ambient data space so as to improve the stability of GAN training. Despite arising from different motivations, WGAN-GP and LS-GAN are closely related approaches to ours. First, we show that these two approaches are very similar, which is not widely known in the literature. Qi (2017) introduced LS-GAN with the idea of maintaining a margin between the losses assigned to real and
fake samples. Further, they also impose a Lipschitz constraint on D, and the two conditions together result in a situation where the following holds (roughly) for any real and fake sample pair:

D_\theta(x) - D_\theta(G_\phi(z)) \approx \|x - G_\phi(z)\| \quad (2)
The authors argue that the resulting discriminator function would have non-vanishing gradients almost everywhere between real and fake samples (section 6 of Qi (2017)). Next, Gulrajani et al. (2017) proposed an extension to address various shortcomings of the original WGAN, and they impose the following condition on D:

\|\nabla_x D_\theta(\hat{x})\| \approx 1 \quad (3)

where \hat{x} = \epsilon x + (1 - \epsilon) G_\phi(z) is some point on the line between a real and a fake sample, both chosen independently at random. This leads to D having norm-1 gradients almost everywhere between real and fake samples. Notice that this behavior is very similar to that of LS-GAN’s discriminator function. Thus, WGAN-GP is a slight variation of the original LS-GAN algorithm, and we refer to these methods as “coupled penalties”.
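For contrast, a sketch of the coupled penalty of equation 3 is shown below; it differs from the local penalty above only in where the gradient is evaluated, namely on random interpolates between real and generated samples.

```python
import torch

def coupled_penalty(D, x_real, x_fake, lambda_=10.0):
    """WGAN-GP style coupled penalty: constrain ||grad_x D(x_hat)|| to 1 at
    x_hat = eps * x + (1 - eps) * G(z), with real and fake samples paired at random."""
    eps_shape = [x_real.size(0)] + [1] * (x_real.dim() - 1)
    eps = torch.rand(eps_shape)                               # one interpolation point per pair
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).detach().requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return lambda_ * ((grad_norm - 1.0) ** 2).mean()
```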
On a side note, we also want to point out that WGAN-GP’s penalty doesn’t actually follow from KR-duality as claimed in their paper. By Lemma 1 of Gulrajani et al. (2017), the optimal discriminator D∗ will have norm-1 gradients (almost everywhere) only between those x and Gφ(z) pairs which are sampled from the optimal coupling or joint distribution π∗. Therefore, there is no basis for WGAN-GP’s penalty (equation 3) where arbitrary pairs of real and fake samples are used. This fact adds more credence to our theory regarding why gradient penalties might be mitigating mode collapse.
The most important distinction between coupled penalties and our methods is that we only impose gradient constraints in local regions around real samples. We refer to these penalty schemes as “local penalties”. Coupled penalties impose gradient constraints between real and generated samples and we point out some potential issues that arise from this:
• With adversarial training finding applications beyond fitting implicit generative models, penalties which depend on generated samples can be prohibitive.
• The resulting class of functions when coupled penalties are used will be highly restricted compared to our method and this affects modeling performance. We refer the reader to Figure 4 and appendix section 5.2.2 to see this effect.
• Our algorithm works with AGD, while WGAN-GP needs multiple inner iterations to optimize D. This is because the generated samples can be anywhere in the data space and they change from one iteration to the next. In contrast, we consistently regularize Dθ(x) only along the real data manifold.
To conclude, appropriate constraining of the discriminator’s gradients can mitigate mode collapse but we should be careful so that it doesn’t have any negative effects. We pointed out some issues with coupled penalties and how local penalties can help. We refer the reader to section 3 for further experimental results.
3 EXPERIMENTAL RESULTS
In section 3.1, we compare the modeling performance of our algorithm against vanilla GAN and WGAN variants in the standard DCGAN/CIFAR-10 setup. Section 3.2 demonstrates DRAGAN’s improved stability across a variety of architectures. In section 3.3, we show that our method also works with other objective functions. Appendix contains samples for inspection, some of the missing plots and additional results. Throughout, we use inception score (Salimans et al., 2016) which is a well-studied and reliable metric in the literature, and sample quality to measure the performance.
3.1 INCEPTION SCORES FOR CIFAR-10 USING DCGAN ARCHITECTURE
DCGAN is a family of architectures designed to perform well with the vanilla training procedure. They are ubiquitous in the GAN literature owing to the instability of vanilla GAN in general settings. We use this architecture to model CIFAR-10 and compare against vanilla GAN, WGAN, and WGAN-GP. As WGANs need 5 discriminator iterations for every generator iteration, comparing modeling performance can be tricky. To address this, we report two scores for vanilla GAN and DRAGAN: one using the same number of generator iterations as WGANs and one using the same number of discriminator iterations. The results are shown in Figure 5 and samples are included in the appendix (Figure 8). Notice that DRAGAN beats the WGAN variants in both configurations, while vanilla GAN is only slightly better. A key point to note here is that our algorithm is fast compared to WGANs, so in practice, the performance will be closer to the DRAGAN_d case. In the next section, we will show that if we move away from this specific architecture family, vanilla GAN training can become highly unstable and that the DRAGAN penalty mitigates this issue.
3.2 MEASURING STABILITY AND PERFORMANCE ACROSS ARCHITECTURES
Ideally, we would want our training procedure to perform well in a stable fashion across a variety of architectures (other than DCGANs). Similar to Arjovsky et al. (2017) and Gulrajani et al. (2017), we remove the stabilizing components of DCGAN architecture and demonstrate improved stability & modeling performance compared to vanilla GAN training (see appendix section 5.2.3). However, this is a small set of architectures and it is not clear if there is an improvement in general.
To address this, we introduce a metric termed the BogoNet score to compare the stability & performance of different GAN training procedures. The basic idea is to choose random architectures for players G and D independently, and evaluate the performance of different algorithms in the resulting games. A good algorithm should achieve stable performance without failing to learn or resulting in mode collapse, despite the potentially imbalanced architectures. In our experiment, each player is assigned a network from a diverse pool of architectures belonging to three different families (MLP, ResNet, DCGAN).
To demonstrate that our algorithm performs better compared to vanilla GAN training and WGAN-GP, we created 100 such instances of hard games. Each instance is trained using these algorithms on CIFAR-10 (under similar conditions for a fixed number of generator iterations, which gives a slight advantage to WGAN-GP) and we plot how inception score changes over time. For each algorithm, we calculated the average of final inception scores and area under the curve (AUC) over all 100 instances. The results are shown in Table 1. Notice that we beat the other algorithms in both metrics, which indicates some improvement in stability and modeling performance.
Further, we perform some qualitative analysis to verify that BogoNet score indeed captures the improvements in stability. We create another set of 50 hard architectures and compare DRAGAN against vanilla GAN training. Each instance is allotted 5 points and we split this bounty between the two algorithms depending on their performance. If both perform well or perform poorly, they get 2.5 points each, so that we nullify the effect of such non-differentiating architectures. However, if one algorithm achieves stable performance compared to the other (in terms of failure to learn or mode collapses), we assign it higher portions of the bounty. Results were judged by two of the authors in a blind manner: The curves were shown side-by-side with the choice of algorithm for each side being randomized and unlabeled. The vanilla GAN received an average score of 92.5 while our algorithm achieved an average score of 157.5 and this correlates with BogoNet score from earlier. See appendix section 5.3 for some additional details regarding this experiment.
3.3 STABILITY USING DIFFERENT OBJECTIVE FUNCTIONS
Our algorithm improves stability across a variety of objective functions, and we demonstrate this using the following experiment. Nowozin et al. (2016) show that we can interpret GAN training as minimizing various f-divergences when an appropriate game objective function is used. We show experiments using the objective functions developed for Forward KL, Reverse KL, Pearson χ2, Squared Hellinger, and Total Variation divergence minimization. We use a hard architecture from the previous subsection to demonstrate the improvements in stability. Our algorithm is stable in all cases except for the total variation case, while the vanilla algorithm failed in all the cases (see Figure 6 for two examples and Figure 15 in the appendix for all five). Thus, practitioners can now choose their game objective from a larger set of functions and use DRAGAN (unlike WGANs, which require a specific objective function).
4 CONCLUSIONS
In this paper, we propose to study the GAN training process as regret minimization, in contrast to the popular view that it consistently minimizes a divergence between the real and generated distributions. We analyze the convergence of GAN training from this new point of view and hypothesize that mode collapse occurs due to the existence of undesirable local equilibria. We make a simple observation that mode collapse situations often exhibit sharp gradients of the discriminator function around some real data points. This characterization partly explains the workings of previously proposed WGAN and gradient penalties, and it motivates our novel penalty scheme. We show evidence of improved stability using DRAGAN and the resulting improvements in modeling performance across a variety of settings. We leave it to future work to explore our ideas in more depth and come up with improved training algorithms.
5 APPENDIX
5.1 SAMPLES AND LATENT SPACE WALKS
In this section, we provide samples from an additional experiment run on CelebA dataset (Figure 7). The samples from the experiment in section 3.1 are shown in Figure 8. Further, Radford et al. (2015) suggest that walking on the manifold learned by the generator can expose signs of memorization. We use DCGAN architecture to model MNIST and CelebA datasets using DRAGAN penalty, and the latent space walks of the learned models are shown in Figure 9 and Figure 10. The results demonstrate that the generator is indeed learning smooth transitions between different images, when our algorithm is used.
5.2 ADDITIONAL EXPERIMENTS
5.2.1 ONE HIDDEN LAYER NETWORK TO MODEL MNIST
We design a simple experiment where G and D are both fully connected networks with just one hidden layer. Vanilla GAN performs poorly even in this simple case and we observe severe mode collapses. In contrast, our algorithm is stable throughout and obtains decent quality samples despite the constrained setup.
5.2.2 8-GAUSSIANS EXPERIMENT
We analyze the performance of WGAN-GP and DRAGAN on the 8-Gaussians dataset. As can be seen in Figure 13, both of them approximately converge to the real distribution, but notice that in the case of WGAN-GP, Dθ(x) seems overly constrained in the data space. In contrast, DRAGAN’s discriminator is more flexible.
Figure 13: Comparing the performance of WGAN-GP (labeled “Improved WGAN”) and DRAGAN on the 8-Gaussians dataset. Orange is real samples, green is generated samples. The level sets of Dθ(x) are shown in the background, with yellow as high and purple as low.
5.2.3 STABILITY ACROSS DCGAN ARCHITECTURE VARIATIONS
DCGAN architecture has been designed following specific guidelines to make it stable (Radford et al., 2015). We restate the suggested rules here.
1. Use all-convolutional networks which learn their own spatial downsampling (discriminator) or upsampling (generator)
2. Remove fully connected hidden layers for deeper architectures
3. Use batch normalization in both the generator and the discriminator
4. Use ReLU activation in the generator for all layers except the output layer, which uses tanh
5. Use LeakyReLU activation in the discriminator for all layers
We show below that such constraints can be relaxed when using our algorithm and still maintain training stability. Below, we present a series of experiments in which we remove different stabilizing components from the DCGAN architecture and analyze the performance of our algorithm. Specifically, we choose the following four architectures which are difficult to train (in each case, we start with base DCGAN architecture and apply the changes) -
• No BN and a constant number of filters in the generator
• 4-layer 512-dim ReLU MLP generator
• tanh nonlinearities everywhere
• tanh nonlinearity in the generator and 4-layer 512-dim LeakyReLU MLP discriminator
Notice that, in each case, our algorithm is stable while the vanilla GAN training fails. A similar approach is used to demonstrate the stability of training procedures in Arjovsky et al. (2017) and Gulrajani et al. (2017).
[Figure panels: (a) tanh activation, (b) FC generator]
5.2.4 STABILITY ACROSS OBJECTIVE FUNCTIONS
Due to space limitations, we only showed plots for two cases in section 3.3. Below we show the results for all five cases.
[Figure 15 panels: (a) Reverse KL, (b) Pearson χ2, (c) Forward KL, (d) Total Variation]
5.3 BOGONET DETAILS
We used three families of architectures with sampling probabilities DCGAN (0.6), ResNet (0.2), and MLP (0.2). Next, we further parameterized each family to create additional variation. For instance, the DCGAN family can result in networks with or without batch normalization and with LeakyReLU or Tanh nonlinearities. The number and width of filters and the latent space dimensionality are some other possible variations in our experiment. Similarly, the number of layers and hidden units in each layer for MLPs are chosen randomly. For ResNets, we chose the depth randomly. This creates a set of hard games which test the stability of a given training algorithm.
We showed qualitative analysis of the inception score plots in section 3.2 to verify that BogoNet score indeed captures the improvements in stability. Below, we show some examples of how the bounty splits were done. The plots in Figure 14 were scored as (averages are shown in DRAGAN, Vanilla GAN order):
A - (5, 0), B - (3.5, 1.5), C – (2.25, 2.75), D – (2, 3) | 1. What is the main contribution of the paper regarding Generative Adversarial Networks (GAN)?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and significance of the presented ideas?
4. Do you have any concerns or suggestions regarding the paper's organization and focus?
5. Are there any specific points or sections that need further clarification or improvement? | Review | Review
This paper contains a collection of ideas about Generative Adversarial Networks (GAN) but it is very hard for me to get the main point of this paper. I am not saying ideas are not interesting, but I think the author needs to choose the main point of the paper, and should focus on delivering in-depth studies on the main point.
1. On the game theoretic interpretations
The paper, Generative Adversarial Nets, NIPS 2014, already presented the game-theoretic interpretation of GANs, so it's hard for me to see what's new in this section. Best-response dynamics is not used in conventional GAN training, because it's very hard to find the global optimum of the inner minimization and outer maximization.
The convergence of the online primal-dual gradient descent method in minimax games is already well known, but this analysis cannot be applied to the usual GAN setting because the objective is not convex-concave. I would find this analysis very interesting if the authors could find a toy example where the GAN becomes convex-concave by using different model parameterizations and/or a different f-divergence, and conduct various studies on the convergence and stability of this problem.
I also found that the hypothesis on mode collapse has a very limited connection to the convex-concave case. It is OK to form the hypothesis and present an interesting research direction, but in order to make this the main point of the paper, the authors should provide more rigorous arguments or experimental studies instead of jumping to the hypothesis in two sentences. For example, if the authors could provide a toy example contrasting the case where the GAN becomes convex-concave with the non-convex-concave case, and show how the loss-function shape or gradient dynamics change, that would provide very valuable insights into the problem.
2. DRAGAN
As open commenters pointed out, I found it difficult to see why we want to make the norm of the gradient equal to 1.
Why not 2? Why not 1/2? Why is 1 special?
In the WGAN paper, the gradient is clipped to a number less than 1, because it is a sufficient condition for being 1-Lipschitz, but this paper provides no justification for this number.
It's OK not to have the theoretical answers to the questions but in that case the authors should provide ablation experiments. For example, sweeping gradient norm target from 10^-3, 10^-2, 10^-1, 1.0, 10.0, etc and their impact on the performance.
Also scheduling regularization parameter like reducing the size of lambda exponentially would be interesting as well.
Most of those studies won't be necessary if the theory is sound. However, since this paper does not provide a justification on the magic number "1", I think it's better to include some form of ablation studies.
Note that the item 1 and item 2 are not strongly related to each other, and can be two separate papers. I recommend to choose one direction and provide in-depth study on one topic. Currently, this paper tries to present interesting ideas without very deep investigations, and I cannot recommend this paper to be published. |
ICLR | Title
Annealed Training for Combinatorial Optimization on Graphs
Abstract
The hardness of combinatorial optimization (CO) problems hinders collecting solutions for supervised learning. However, learning neural networks for CO problems is notoriously difficult given the lack of labeled data, as the training gets trapped easily at local optima. We propose a simple but effective annealed training framework for CO problems in this work. In particular, we transform CO problems into unbiased energy-based models (EBMs). We carefully select the penalty terms to make the EBMs as smooth as possible. Then we train graph neural networks to approximate the EBMs, and we introduce an annealed loss function to prevent the training from being stuck at local optima near the initialization. An experimental evaluation demonstrates that our annealed training framework obtains substantial improvements. On four types of CO problems, our method achieves performance substantially better than other unsupervised neural methods on both synthetic and real-world graphs.
1 INTRODUCTION
Combinatorial optimization (CO) problems occur whenever there is a requirement to select the best option from a finite set of alternatives. They arise in various application areas, such as business, medicine, and engineering (Paschos, 2013). Many CO problems are NP-complete (Karp, 1972; Garey & Johnson, 1979). Thus, when exact algorithms for finding the optimal solution (Padberg & Rinaldi, 1991; Wolsey & Nemhauser, 1999) are ruled out, different heuristic methods are employed to find suitable solutions in a reasonable time (Nemhauser et al., 1978; Dorigo et al., 2006; Hopfield & Tank, 1985; Kirkpatrick et al., 1983).
Often, instances from the same combinatorial optimization problem family are solved repeatedly, giving rise to the opportunity for learning to improve the heuristic (Bengio et al., 2020). Recently, learning algorithms for CO problems have shown much promise, including supervised (Khalil et al., 2016; Gasse et al., 2019; Li et al., 2018; Selsam et al., 2018; Nair et al., 2020), unsupervised (Karalias & Loukas, 2020; Toenshoff et al., 2021), and reinforcement learning (Dai et al., 2017; Sun et al., 2020; Yolcu & Póczos, 2019; Chen & Tian, 2019). The success of supervised learning relies on labeled data. However, solving a hard problem can take several hours or even days and is computationally prohibitive (Yehuda et al., 2020). Reinforcement learning, suffering from its larger state space and lack of full differentiability, tends to be more challenging and time-consuming to train.
Unsupervised learning usually transforms a CO problem into an optimization problem with a differentiable objective function f where the minima represent discrete solutions (Hopfield & Tank, 1985; Smith, 1999; Karalias & Loukas, 2020). Although this framework allows for efficient learning on large, unlabeled datasets, it is not without challenges. The objective function is typically highly non-convex (Mezard & Montanari, 2009). During learning, the model’s parameters can easily get trapped near a local optimum close to the initialization, never reaching the optimal set of parameters. This makes unsupervised learning for CO problems extremely hard.
To address this challenge, we propose an annealed training framework. In detail, given a CO problem, we consider a tempered EBM Pτ ∝ e−f(x)/τ , where the energy function f unifies constrained or unconstrained CO problems via the big-M method, that is to say, adding large penalties for violated constraints. We derive the minimum values of the penalty coefficient in different CO problems that give us the smoothest, unbiased energy-based models. We train a graph neural network (GNN) that
predicts a variational distribution Qϕ to approximate the energy-based model Pτ . During training, we set a high initial temperature τ and decrease it gradually during the training process. When τ is large, Pτ is close to a uniform distribution and only has shallow local optima, such that the parameter θ can traverse to distant regions. When τ decreases to values small enough, the unbiased model Pτ will concentrate on the optimal solutions to the original CO problem.
The experiments are evaluated on four NP-hard graph CO problems: MIS, maximum clique, MDS, and minimum cut. On both synthetic and real-world graphs, our annealed training framework achieves excellent performance compared to other unsupervised neural methods (Toenshoff et al., 2021; Karalias & Loukas, 2020), classical algorithms (Aarts et al., 2003; Bilbro et al., 1988), and integer solvers (Gurobi Optimization). The ablation study demonstrates the importance of selecting proper penalty coefficients and cooling schedules.
In summary, our work has the following contributions:
• We propose an annealed learning framework for generic unsupervised learning on combinatorial optimization problems. It is simple to implement yet effective in improving unsupervised learning across various problems on both synthetic and real graphs. • We conduct ablation studies that show: 1) annealed training enables the parameters to escape from local optima and traverse a longer distance, 2) selecting proper penalty coefficients is essential, and 3) using a sufficiently large initial temperature is critical.
2 ANNEALED TRAINING FOR COMBINATORIAL OPTIMIZATION
We want to learn a graph neural network Gθ to solve combinatorial optimization problems. Given an instance I , the Gθ generates a feature ϕ = Gθ(I) that determines a variational distribution Qϕ, from which we decode solutions. This section presents our annealed training framework for training Gθ. We first represent CO problems via an energy-based model. Then, we define the annealed loss function and explain how it helps in training. Finally, we give a toy example to help the understanding.
2.1 ENERGY BASED MODEL
We denote the set of combinatorial optimization (CO) problems as I. An instance I ∈ I is
I = \left(c(\cdot), \{\psi_i\}_{i=1}^{m}\right) := \arg\min_{x \in \{0,1\}^n} c(x) \quad \text{s.t.} \quad \psi_i(x) = 0, \; i = 1, \ldots, m \quad (1)
where c(·) is the objective function and ψi(x) ∈ {0, 1} indicates whether the i-th constraint is violated, i.e., ψi(x) = 0 exactly when the constraint is satisfied. We rewrite the constrained problem into an equivalent unconstrained form via the big-M method:
\arg\min_{x \in \{0,1\}^n} \; f^{(I)}(x) := c(x) + \sum_{i=1}^{m} \beta_i \psi_i(x), \quad \beta_i \ge 0 \quad (2)
If f^{(I)} attains its smallest values on optimal solutions of equation 1, we refer to it as unbiased. The selection of the penalty coefficients β plays an important role in the success of training, and we discuss our choice of β in detail in section 3. Using an unbiased f^{(I)} as an energy to measure the fitness of a solution x, solving CO problems is converted to finding low-energy states. Accordingly, we can define the unbiased energy-based models (EBMs):
P^{(I)}_{\tau}(x) \propto e^{-f^{(I)}(x)/\tau} \quad (3)
where a state x is more likely to be observed than another state x′ if it has lower energy, f^{(I)}(x) < f^{(I)}(x′). The EBMs naturally introduce a temperature τ to control the smoothness of the distribution. When f is unbiased, the model has the following property:
Proposition 2.1. Assume f is unbiased, that’s to say, all minimizers of equation 2 are feasible solutions for equation 1. When the temperature τ increases to infinity, the energy-based model Pτ converges to a uniform distribution over the whole state space {0, 1}n. When the temperature τ decreases to zero, the energy-based model Pτ converges to a uniform distribution over the optimal solutions for equation 1.
The proposition above shows that the temperature τ in unbiased EBMs provides an interpolation between a flat uniform distribution and a sharp distribution concentrated on optimal solutions. This idea is the key to the success of simulated annealing (Kirkpatrick et al., 1983) in inference tasks. We will show that the temperature also helps in learning.
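The interpolation described in proposition 2.1 can be checked directly on a tiny instance where the state space can be enumerated; the sketch below uses an unweighted MIS energy on a triangle graph, with a penalty coefficient of 2 chosen (illustratively) to strictly dominate the unit node weights.

```python
import itertools
import numpy as np

def tempered_distribution(energies, tau):
    """P_tau(x) proportional to exp(-f(x)/tau) over an enumerable state space."""
    logits = -np.asarray(energies, dtype=float) / tau
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Unbiased MIS energy on a triangle graph: f(x) = -sum_i x_i + beta * sum_{(i,j) in E} x_i x_j
edges = [(0, 1), (1, 2), (0, 2)]
states = list(itertools.product([0, 1], repeat=3))
energies = [-sum(x) + 2.0 * sum(x[i] * x[j] for i, j in edges) for x in states]

for tau in (100.0, 1.0, 0.01):
    p = tempered_distribution(energies, tau)
    print(tau, np.round(p, 3))  # high tau: nearly uniform; low tau: mass on the three optimal sets
```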
2.2 TEMPERED LOSS AND PARAMETERIZATION
We want to learn a graph neural network Gθ parameterized by θ. Given an instance I ∈ I, Gθ(I) = ϕ generates a vector ϕ that determines a variational distribution Qϕ to approximate the target distribution P^{(I)}_τ. We want to minimize the KL divergence:
D_{KL}(Q_\phi \,\|\, P^{(I)}_\tau) = \int Q_\phi(x)\left(\log Q_\phi(x) - \log \frac{e^{-f^{(I)}(x)/\tau}}{\sum_{z \in \{0,1\}^n} e^{-f^{(I)}(z)/\tau}}\right) dx \quad (4)

= \frac{1}{\tau}\,\mathbb{E}_{x \sim Q_\phi(\cdot)}\left[f^{(I)}(x)\right] - H(Q_\phi) + \log \sum_{z \in \{0,1\}^n} e^{-f^{(I)}(z)/\tau} \quad (5)

where H(p) = -\sum_{x} p(x)\log p(x) denotes the entropy of a distribution p. Removing the terms not involving ϕ and multiplying by the constant τ, we define our annealed loss functions for ϕ and θ as:

L_\tau(\phi, I) = \mathbb{E}_{x \sim Q_\phi(\cdot)}\left[f^{(I)}(x)\right] - \tau H(Q_\phi) \quad (6)

L_\tau(\theta) = \mathbb{E}_{I \sim \mathcal{I}}\left[\mathbb{E}_{x \sim Q_{G_\theta(I)}(\cdot)}\left[f^{(I)}(x)\right] - \tau H(Q_{G_\theta(I)})\right] \quad (7)
In this work, we consider the variational distribution as a product distribution:
Q_\phi(x) = \prod_{i=1}^{n} (1 - \phi_i)^{1 - x_i}\, \phi_i^{x_i} \quad (8)
where ϕ ∈ [0, 1]^n. Such a form is popular in learning graph neural networks for combinatorial optimization (Li et al., 2018; Dai et al., 2020; Karalias & Loukas, 2020) for its simplicity and effectiveness. However, directly applying it to unsupervised learning is challenging. Unlike supervised learning, where the cross-entropy loss is convex in ϕ, Lτ(ϕ, I) in unsupervised learning can be highly non-convex, especially when τ is small.
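For reference, a small sketch of the loss in equation 6 under this product distribution is given below; the entropy has a closed form, while the expected energy is estimated here by Monte Carlo for an arbitrary energy function (for training one would use a closed form such as equation 13, since sampling breaks the gradient path to ϕ).

```python
import torch

def product_entropy(phi, eps=1e-8):
    """Closed-form entropy H(Q_phi) of the product distribution in equation (8)."""
    return -(phi * torch.log(phi + eps) + (1 - phi) * torch.log(1 - phi + eps)).sum()

def annealed_loss_estimate(phi, energy_fn, tau, num_samples=128):
    """Monte Carlo estimate of L_tau(phi, I) = E_{x ~ Q_phi}[f(x)] - tau * H(Q_phi).
    Note: the sampled expectation is not differentiable w.r.t. phi; it is shown
    only to illustrate the objective being minimized."""
    x = torch.bernoulli(phi.expand(num_samples, -1))   # (num_samples, n) binary samples
    return energy_fn(x).mean() - tau * product_entropy(phi)
```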
2.3 ANNEALED TRAINING
To address the non-convexity in training, we employ annealed training. In particular, we use a large initial temperature τ0 to smooth the loss function and reduce τt gradually to zero during training. By proposition 2.1, this can be seen as curriculum learning (Bengio et al., 2009) along the interpolation path from the easier uniform distribution to the more challenging target distribution.
Why is it helpful? We need a thorough investigation of the training procedure to answer this. Since the loss function equation 7 is the expectation over the set of instances I , we use a batch of instances I1, ..., IB to calculate the empirical loss L̂τ (θ) and perform stochastic gradient descent. It gives:
\nabla_\theta \hat{L}_\tau(\theta) = \sum_{i=1}^{B} \nabla_\theta L_\tau(G_\theta(I_i), I_i) = \sum_{i=1}^{B} \frac{\partial G_\theta(I_i)}{\partial \theta}\, \nabla_\phi L_\tau(\phi, I_i)\Big|_{\phi = G_\theta(I_i)} \quad (9)

= \mathbb{E}_{I \sim \mathcal{I}}\left[\frac{\partial G_\theta(I)}{\partial \theta}\, \nabla_\phi L_\tau(\phi, I)\Big|_{\phi = G_\theta(I)}\right] + \xi \quad (10)

\approx \mathbb{E}_{I \sim \mathcal{I}}\left[\frac{\partial G_\theta(I)}{\partial \theta}\left(\nabla_\phi L_\tau(\phi, I)\Big|_{\phi = G_\theta(I)} + \zeta\right)\right] \quad (11)
In equation 10, we assume the batch introduces a stochastic term ξ in the gradient w.r.t. θ. In equation 11, we incorporate the stochastic term into the gradient with respect to ϕ. When we assume ζ is Gaussian noise, the inner term g = ∇ϕLτ(ϕ, I)|ϕ=Gθ(I) + ζ behaves as a stochastic Langevin gradient with respect to ϕ (Welling & Teh, 2011). Since the training data are sampled from a fixed distribution I ∼ I, the scale of the noise ζ is also fixed. When Lτ(ϕ, I) is non-smooth, the randomness from ζ is negligible compared to the gradient ∇Lτ(ϕ, I) and cannot bring ϕ out of local optima. By introducing the temperature τ, we smooth the loss function and reduce the magnitude of ∇Lτ(ϕ, I). During training, the annealed procedure thus performs an implicit simulated annealing (Kirkpatrick et al., 1983) for ϕ.
2.4 A TOY EXAMPLE
We look at a toy example to gain a more intuitive understanding of annealed training. Consider an MIS problem on an undirected, unweighted graph G = (V, E); the corresponding energy function f(x) is:
f(x) = -\sum_{i=1}^{n} x_i + \sum_{(i,j) \in E} x_i x_j \quad (12)
Its correctness can be justified by proposition 3.1. When we use the variational distribution Qϕ in equation 8, the first term in Lτ(ϕ, I) becomes:
\mathbb{E}_{x \sim Q_\phi(\cdot)}\left[f^{(I)}(x)\right] = -\sum_{i=1}^{n} \phi_i + \sum_{(i,j) \in E} \phi_i \phi_j \quad (13)
and accordingly, the gradient w.r.t. ϕi is:

g = -1 + 2\sum_{j \in N(i)} \phi_j + \tau\left(\log \phi_i - \log(1 - \phi_i)\right) + \zeta \quad (14)
where we assume ζ ∼ N(0, σ^2) for a very small σ. When the temperature τ = 0, ϕi collapses to either 0 or 1 very fast: when ϕi = 1, we have g = −1 + ζ, and when ϕi = 0, we have g ≥ 1 + ζ. Since σ is small, the noise ζ can hardly have an effect, and ϕ will be stuck at a local optimum, i.e., any maximal independent set such as Figure 1(a). In Figure 1, we simulate the input (a) at decreasing temperatures τ = 1.0, 0.5, 0.1. When τ is large, all ϕi are pushed to a neutral state, e.g., Figure 1(b), where the differences among the ϕi are on the scale of 10^{-3}. In this case, the noise ζ can significantly affect the sign of the gradient g and lead to phase transitions. By gradually decreasing the temperature, ϕ collapses to the global optimum and provides correct guidance for updating θ.
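The dynamics described above can be reproduced in a few lines of code; the sketch below runs noisy gradient descent on Lτ(ϕ) for the unweighted MIS energy with a simple (illustrative) linear cooling schedule, where the step size and noise scale are assumptions rather than values from the paper.

```python
import torch

def simulate_annealed_mis(adj, num_steps=2000, lr=0.05, sigma=1e-3, tau0=1.0):
    """Noisy gradient descent on L_tau(phi) for the MIS energy (equations 13-14),
    with the temperature annealed from tau0 towards zero."""
    n = adj.size(0)
    phi = torch.full((n,), 0.5, requires_grad=True)
    for step in range(num_steps):
        tau = tau0 * (1.0 - step / num_steps)                # linear cooling, for illustration
        expected_f = -phi.sum() + 0.5 * (phi @ adj @ phi)    # equation (13); adj is a symmetric 0/1 matrix
        entropy = -(phi * torch.log(phi + 1e-8)
                    + (1 - phi) * torch.log(1 - phi + 1e-8)).sum()
        loss = expected_f - tau * entropy
        grad, = torch.autograd.grad(loss, phi)
        with torch.no_grad():
            phi -= lr * (grad + sigma * torch.randn_like(phi))   # noisy gradient step
            phi.clamp_(1e-4, 1.0 - 1e-4)
    return (phi.detach() > 0.5).int()                        # round to a binary assignment

# Example: the triangle graph used in the toy discussion above.
adj = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
print(simulate_annealed_mis(adj))   # typically a single selected node, i.e., a maximum independent set
```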
3 CASE STUDY
We consider four combinatorial optimization problems on graphs in this work: maximum independent set (MIS), maximum clique, minimum dominating set (MDS), and minimum cut. All of these problems can be represented on an undirected weighted graph G = (V, E, w), where V = {1, ..., n} is the set of nodes, E is the set of edges, and w is the weight function. For any i ∈ V, wi = w(i) is the weight of the node. For any (i, j) ∈ E, wij = w(i, j) is the weight of the edge. For each problem, we derive the minimum value of the penalty coefficient β such that the energy function attains its lowest energy at optimal solutions, and we use the derived values to design the loss functions in our experiments.
3.1 MAXIMUM INDEPENDENT SET AND MAXIMUM CLIQUE
An independent set is a subset of the vertices S ⊆ V , such that for arbitrary i, j ∈ S, (i, j) /∈ E. The MIS problem is finding an independent set S with the largest weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} \; c(x) := -\sum_{i=1}^{n} w_i x_i, \quad \text{subject to } x_i x_j = 0, \;\forall (i,j) \in E \quad (15)
We define the corresponding energy function:
f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j) \in E} \beta_{ij} x_i x_j \quad (16)
Proposition 3.1. If βij ≥ max{wi, wj} for all (i, j) ∈ E, then for any x ∈ {0, 1}n, there exists a x′ ∈ {0, 1}n that satisfies the constraints in equation 15 and has lower energy: f(x′) ≤ f(x).
Maximum clique is equivalent to MIS on the complementary graph. Since a GNN is unaware of this connection, studying maximum clique for learning-based approaches is still fruitful. The definition of maximum clique is given in Appendix B.2, and we show how to properly select the penalty coefficient here. Proposition 3.2. If βij ≥ max{wi, wj} for all (i, j) ∈ E^c, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 24 and has lower energy: f(x′) ≤ f(x).
3.2 MINIMUM DOMINATE SET
A dominating set is a subset of the vertices S ⊆ V, where for any v ∈ V, either v ∈ S or there exists u ∈ S such that (u, v) ∈ E. The MDS problem is finding a dominating set S with the minimum weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} \; c(x) := \sum_{i=1}^{n} w_i x_i, \quad \text{subject to } (1 - x_i)\prod_{j \in N(i)}(1 - x_j) = 0, \;\forall i \in V \quad (17)
We define the corresponding energy function:
f(x) := \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \beta_i (1 - x_i)\prod_{j \in N(i)}(1 - x_j) \quad (18)
Proposition 3.3. If βi ≥ min_k{wk : k ∈ N(i) or k = i}, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 17 and has lower energy: f(x′) ≤ f(x).
3.3 MINIMUM CUT
A partition consists of two subsets, S and V\S. The cut cut(S) is defined as the total weight of the edges between S and V\S. The volume of S is defined as vol(S) = \sum_{i \in S} d_i, where di is the degree of node i. The minimum cut problem is to find an S having the minimum cut, subject to the volume of S lying between D0 and D1. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} \; c(x) := \sum_{(i,j) \in E} x_i (1 - x_j) w_{ij}, \quad \text{subject to } D_0 \le \sum_{i=1}^{n} d_i x_i \le D_1 \quad (19)
We define the corresponding energy function:
f(x) := \sum_{(i,j) \in E} x_i (1 - x_j) w_{ij} + \beta\left(\sum_{i=1}^{n} d_i x_i - D_1\right)_{+} + \beta\left(D_0 - \sum_{i=1}^{n} d_i x_i\right)_{+} \quad (20)
Proposition 3.4. If β ≥ \max_i\{\sum_{j \in N(i)} |w_{i,j}|\}, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 19 and has lower energy: f(x′) ≤ f(x).
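The penalty-coefficient bounds above can be sanity-checked by brute force on small graphs; the sketch below does this for the MIS energy of equation 16 (it checks the slightly stronger property that every minimizer is feasible, which holds when β strictly exceeds the bound, and the random test graph is an illustrative choice).

```python
import itertools
import networkx as nx

def mis_energy_minimizers_feasible(graph, beta):
    """Brute-force check that the penalized MIS energy (equation 16) attains its
    minimum only at independent sets. Node weights default to 1 if unspecified."""
    nodes = list(graph.nodes())
    w = {i: graph.nodes[i].get("weight", 1.0) for i in nodes}
    best, minimizers = float("inf"), []
    for bits in itertools.product([0, 1], repeat=len(nodes)):
        x = dict(zip(nodes, bits))
        f = -sum(w[i] * x[i] for i in nodes) + beta * sum(x[i] * x[j] for i, j in graph.edges())
        if f < best - 1e-9:
            best, minimizers = f, [x]
        elif abs(f - best) <= 1e-9:
            minimizers.append(x)
    return all(all(x[i] * x[j] == 0 for i, j in graph.edges()) for x in minimizers)

g = nx.erdos_renyi_graph(8, 0.4, seed=0)
print(mis_energy_minimizers_feasible(g, beta=1.01))   # True: beta above the bound of Prop. 3.1
print(mis_energy_minimizers_feasible(g, beta=0.2))    # False: edge violations become profitable
```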
4 RELATED WORK
Recently, there has been a surge of interest in learning algorithms for CO problems (Bengio et al., 2020). Supervised learning is widely used. Numerous works have combined GNNs with search procedures to solve classical CO problems, such as the traveling salesman problem (Vinyals et al., 2015; Joshi et al., 2019; Prates et al., 2019), graph matching (Wang et al., 2019; 2020), quadratic assignments (Nowak et al., 2017), graph coloring (Lemos et al., 2019), and MIS (Li et al., 2018).
[Displaced rows from the benchmark results tables (see Section 5.2): approximation ratios ± std. and runtimes (s) for Erdos, Ours (annealed), RUNCSP, RUNCSP(A), Greedy, MFA, and Gurobi with 0.5s and 1.0s time limits, reported on the small and large graph datasets.]
Another fruitful direction is combining learning with existing solvers. For example, in the branch and bound algorithm, He et al. (2014); Khalil et al. (2016); Gasse et al. (2019); Nair et al. (2020) learn the variable selection policy by imitating the decisions of an oracle or rules designed by human experts. However, the success of supervised learning relies on large labeled datasets, which are hard to generate efficiently in an unbiased and representative manner (Yehuda et al., 2020).
Many works therefore choose to use reinforcement learning instead. Dai et al. (2017) combine Q-learning with greedy algorithms to solve CO problems on graphs. Q-learning is also used in Bai et al. (2020) for the maximum subgraph problem. Sun et al. (2020) use an evolutionary strategy to learn variable selection in the branch and bound algorithm. Yolcu & Póczos (2019) employ the REINFORCE algorithm to learn local heuristics for SAT problems. Chen & Tian (2019) use actor-critic learning to learn a local rewriting algorithm. Despite being a promising approach that avoids using labeled data, reinforcement learning is typically sample-inefficient and notoriously unstable to train due to poor gradient estimates, correlations in the sequence of observations, and hard exploration (Espeholt et al., 2018; Tang et al., 2017).
Works in unsupervised learning show promising results. In initial attempts, Hopfield & Tank (1985); Van den Bout & Miller (1989); Ramanujam & Sadayappan (1995) transform CO problems into optimization problems over neural networks with differentiable objective functions. More recently, a series of deep learning approaches has emerged. Yao et al. (2019) train a GNN for the max-cut problem by optimizing a relaxation of the cut objective, Toenshoff et al. (2021) train an RNN for maximum-SAT via maximizing the probability of its prediction, and Karalias & Loukas (2020) use a GNN to predict a distribution and train the network to minimize the expectation of the objective function under this distribution. The probabilistic method provides a good framework for unsupervised learning. However, optimizing the distribution is typically non-convex (Mezard & Montanari, 2009), making the training very unstable.
5 EXPERIMENTS
5.1 SETTINGS
Dataset: For MIS and maximum clique, problems on both real and random graphs are easy (Dai et al., 2020). Hence, we follow Karalias & Loukas (2020) to use RB graphs (Xu et al., 2007), designed to
generate hard instances. We use a small dataset containing graphs with 200-300 nodes and a large dataset containing graphs with 800-1200 nodes. For MDS, we follow Dai et al. (2020) to use BA graphs with 4 attaching edges (Barabási & Albert, 1999). We also use a small dataset containing graphs with 200-300 nodes and a large dataset containing graphs with 800-1200 nodes. We also use real graph datasets Collab, Twitter from TUdataset (Morris et al., 2020). For minimum cut, we follow Karalias & Loukas (2020) and use real graph datasets including SF-295 (Yan et al., 2008), Facebook (Traud et al., 2012), and Twitter (Morris et al., 2020). For RB graphs, the optimal solution is known during the graph construction. For other problems, we generate the "ground truth" solution through Gurobi 9.5 (Gurobi Optimization) with a time limit of 3600 seconds. For synthetic datasets, we generate 2000 graphs for training, 500 for validation, and 500 for testing. For real datasets, we follow Karalias & Loukas (2020) and use a 60-20-20 split for training, validating, and testing.
Implementation: We train our graph neural network on the training data for 500 epochs. We choose the penalty coefficient β at the critical point for each problem type. We use the schedule:
τk = τ0/(1 + αk) (21)
where τ0 is chosen as the Lipschitz constant of the energy function in equation 2 and α is selected so that the final temperature is τ500 = 0.001. Since the contribution of this work lies in the training framework, the particular architecture of the graph neural network is not important. Hence, for a fair comparison, we report results from applying annealed training to the models of Karalias & Loukas (2020) and Toenshoff et al. (2021), denoted "Annealed Erdos" and "Annealed RUNCSP" respectively. In particular, the architecture from Karalias & Loukas (2020) consists of multiple Graph Isomorphism Network layers (Xu et al., 2018) and a graph attention layer (Veličković et al., 2017); for more details, refer to Karalias & Loukas (2020). The architecture from Toenshoff et al. (2021) approximates a constraint language (Dechter et al., 2003) using a message-passing GNN with an LSTM for internal updates (Hochreiter & Schmidhuber, 1997). With both of these GNN architectures, after obtaining the variational distribution Qϕ of equation 8, we generate the solution via conditional decoding (Raghavan, 1988).
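A small sketch of the cooling schedule in equation 21 is given below; the value τ0 = 5.0 in the usage line is only a placeholder for the instance-dependent Lipschitz constant described above.

```python
def temperature_schedule(tau0, num_epochs=500, tau_final=1e-3):
    """Cooling schedule of equation (21): tau_k = tau0 / (1 + alpha * k),
    with alpha solved from the requirement tau_{num_epochs} = tau_final."""
    alpha = (tau0 / tau_final - 1.0) / num_epochs
    return [tau0 / (1.0 + alpha * k) for k in range(num_epochs + 1)]

taus = temperature_schedule(tau0=5.0)
print(taus[0], taus[250], taus[-1])   # 5.0, roughly 0.002, exactly 0.001 at the final epoch
```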
Baselines: We compare our method with unsupervised neural methods, classical algorithms, and integer programming solvers. To establish a strong baseline for neural methods, we use the Erdos GNN (Karalias & Loukas, 2020), the state-of-the-art unsupervised learning framework for combinatorial optimization problems. For maximum clique and MIS, we transform the problems to constraint programming and compare with RUNCSP (Toenshoff et al., 2021); we also implement the annealed training version of RUNCSP and denote it RUNCSP(A). For minimum cut, we follow Karalias & Loukas (2020) and build the L1 GNN and L2 GNN. Among classical algorithms, we consider greedy algorithms and mean field annealing (MFA) (Bilbro et al., 1988). Like our method, MFA runs a mean field approximation (Anderson, 1988) to predict a variational distribution; the difference is that the update rule of MFA is determined after seeing the current graph, while the parameters of the GNN are trained on the whole dataset. For minimum cut, we also follow Karalias & Loukas (2020) and compare with well-known and advanced algorithms: PageRank-Nibble (Andersen et al., 2006), Capacity Releasing Diffusion (CRD) (Wang et al., 2017), Max-flow Quotient-cut Improvement (MQI) (Lang & Rao, 2004), and Simple-Local (Veldt et al., 2016). As the integer programming solver, we use Gurobi 9.0 (Gurobi Optimization) with different time limits; we denote by G(ts) Gurobi 9.0 with a solving time limit of t seconds. Note that Gurobi performs preprocessing before solving, so the actual running time can be longer than the given time limit.
5.2 RESULTS
We report the results for MIS in Table 1, for maximum clique in Table 2, for MDS in Table 3, and for minimum cut in Table 4. Further comparisons with supervised learning methods and evaluations on very large graphs are provided in C.4 and C.5. For MIS and maximum clique, we report the ratio obtained by dividing the obtained value by the optimal value (the larger, the better). For MDS, we report the ratio obtained by dividing the optimal value by the obtained value (the larger, the better). For minimum cut, we follow Karalias & Loukas (2020) and evaluate performance via local conductance cut(S)/vol(S) (the smaller, the better). Annealed training substantially improves the performance of Erdos across all problem types and all datasets, except for SF-295 in minimum cut, by providing a better unsupervised training framework. Our method also outperforms greedy heuristics, classical algorithms such as MFA, CRD, and MQI, and other learning-based approaches such as RUNCSP and L1/L2 GNN. Moreover, with annealed training, the learned GNN outperforms MFA on most problems with fewer iterations, which indicates that learning the shared patterns in graphs is helpful for solving CO problems. As for the integer solver, Gurobi obtains good ratios on smaller graphs; on larger instances, our method achieves comparable or even better results.
5.3 PARAMETER CHANGE DISTANCE
We want to stress that we use the same graph neural network as Erdos or RUNCSP, so the performance improvements come from our annealed training framework. In the scatter plots of Figures 2 and 3, we report the relative change of the GNN parameters for the MIS and MDS problems on the Twitter dataset. The relative change is calculated as ‖u − v‖₂ / ‖v‖₂, where v and u are the vectors obtained by flattening the parameters of the GNN before and after training. For each method, we run 20 seeds. After introducing annealed training, both the ratio and the relative change of the parameters increase systematically, meaning that the parameters of the GNN can traverse to more distant regions and find better optima under annealed learning. We believe this supports the claim that annealed training prevents the training from being stuck at local optima.
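The relative change statistic can be computed directly from two snapshots of the model; the following PyTorch sketch assumes `model_before` and `model_after` are two copies of the same architecture taken before and after training (hypothetical names, not from the released code).

```python
import torch


def relative_parameter_change(model_before, model_after) -> float:
    """Compute ||u - v||_2 / ||v||_2 between flattened parameter vectors v (before) and u (after)."""
    v = torch.cat([p.detach().flatten() for p in model_before.parameters()])
    u = torch.cat([p.detach().flatten() for p in model_after.parameters()])
    return (torch.norm(u - v) / torch.norm(v)).item()
```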
6 ABLATION STUDY
We conduct an ablation study to answer two questions:
1. How does the penalty coefficient β in equation 2 influence the performance?
2. How does the annealing schedule influence the performance?
We conduct the experiments for the MDS problem on the small BA graphs from the previous section.
6.1 PENALTY COEFFICIENT
For the MDS problem, the minimum penalty coefficient β needed to ensure the EBMs are unbiased on the unweighted BA graphs is β = 1.0. To justify the importance of using this minimum penalty, we evaluate the performance for β ∈ {0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 5.0}. For each β, we run experiments with five random seeds and report the results in Figure 4. We can see that the minimum penalty β = 1 gives the best ratio. When the penalty coefficient β < 1, the EBMs
(equation 3) are biased and place weight on infeasible solutions, thereby reducing performance. When the penalty coefficient β > 1, the energy-based model (equation 3) becomes less smooth, which increases the difficulty of training. The penalty coefficient β = 1 gives the smoothest unbiased EBMs and has the best performance. Note that when β = 0 the loss function is non-informative and the performance ratio can be as low as 0.3, so we do not plot its result in the figure.

Figure 2: Distance in MIS (ratio vs. relative change of the GNN parameters after training, Erdos vs. annealed training).
Figure 3: Distance in MDS (ratio vs. relative change of the GNN parameters after training, Erdos vs. annealed training).
Figure 4: Ablation for β (ratio vs. penalty coefficient).
6.2 ANNEALING SCHEDULE
We use the schedule in equation 21 so that the potential change f/τ_{k+1} − f/τ_k ≡ C is constant across steps k. In fact, with the schedule of equation 21, the potential f/τ_k = (1 + α(k − 1)) f/τ_0 is a linear function of k; hence we call it a linear schedule. It is possible to use other schedules, e.g. f/τ_k = (1 + α(k − 1))^{1/2} f/τ_0 and f/τ_k = (1 + α(k − 1))^{3} f/τ_0, which we call the concave and convex schedules. The temperature and potential schedules are visualized in Figure 5. The initial temperature is also an important hyperparameter; we evaluate τ_0 ∈ {0.0, 0.1, 0.5, 1.0, 2.0, 5.0} and report the results in Figure 5. We see that the performance is robust to whether a convex, linear, or concave schedule is used. The more important factor is the initial temperature τ_0: performance degrades when τ_0 is too small, as the energy-based model of equation 3 is then not smooth enough, and is robust when τ_0 is sufficiently large.
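A sketch of the three potential schedules is given below; indexing from k = 0 and the way α is solved from the final-temperature condition are illustrative simplifications of the description above, not the authors' exact code.

```python
def potential_schedule(tau0, num_steps=500, tau_final=0.001, power=1.0):
    """Schedules defined through the potential 1/tau_k = (1 + alpha*k)**power / tau0.

    power=1.0 is the linear schedule of equation 21; power=0.5 and power=3.0
    correspond to the concave and convex schedules discussed in the text.
    """
    alpha = ((tau0 / tau_final) ** (1.0 / power) - 1.0) / (num_steps - 1)
    return [tau0 / (1.0 + alpha * k) ** power for k in range(num_steps)]


linear = potential_schedule(1.0, power=1.0)
concave = potential_schedule(1.0, power=0.5)
convex = potential_schedule(1.0, power=3.0)
```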
7 DISCUSSION
This paper proposes a generic unsupervised learning framework for combinatorial optimization problems and substantially improves the performance of the state-of-the-art method. One restriction of the current method is that it relies on conditional decoding to sample solutions from the learned variational distributions. For problems with more complex constraints, the decoded solutions might be infeasible; hence, we believe better decoding strategies should be considered in future work.
The framework’s success relies on smoothing the loss function via critical penalty coefficients and annealed training, which together effectively prevent the training from being stuck at local optima. The techniques introduced here can potentially be applied in a broader context beyond combinatorial optimization, especially in weakly supervised settings such as logic reasoning (Huang et al., 2021), program induction (Chen et al., 2020), and question answering (Ren et al., 2021), where fine-grained supervision is missing and must be inferred.
A TYPES OF PROBLEMS SOLVABLE
The current method uses conditional decoding (Raghavan, 1988) to sample solutions from the learned variational distributions, which requires a monotonic post-processing step to make sure the final solution is feasible. For example, for the maximum independent set the post-processing removes nodes when a conflict occurs, and for the minimum dominating set it adds nodes when a node has not yet been covered. Such a framework can be applied to CO problems that have trivial feasible solutions, such as set covering problems, but cannot be applied to CO problems with more complicated constraints, such as vehicle routing problems.
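As an illustration of the kind of monotonic post-processing described here, the sketch below decodes an MIS solution from node probabilities with a simple greedy pass; this is a simplified stand-in for conditional decoding, not the exact procedure of Raghavan (1988).

```python
import networkx as nx


def decode_mis(graph: nx.Graph, probs: dict) -> set:
    """Greedy decoding sketch for MIS: visit nodes by decreasing probability and
    keep a node only if none of its neighbours is already selected, so the
    returned set is always a feasible independent set."""
    selected = set()
    for node in sorted(graph.nodes, key=lambda n: probs[n], reverse=True):
        if all(nb not in selected for nb in graph.neighbors(node)):
            selected.add(node)
    return selected
```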
B COMPLETE PROOF
B.1 MAXIMUM INDEPENDENT SET
In MIS, we use the energy function:
$$f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j)\in E} \beta_{ij}\, x_i x_j \qquad (22)$$
We are going to prove the following proposition. Proposition B.1. If β_{ij} ≥ min{w_i, w_j} for all (i, j) ∈ E, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 15 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}^n, if x satisfies all constraints, we simply let x′ = x. Otherwise, there must exist an edge (i, j) ∈ E such that x_i x_j = 1. Denoting k = argmin{w_i, w_j}, we define x′_i = x_i for i ≠ k and x′_k = 0. In this case, we have:
$$f(x') - f(x) = w_k - \sum_{j\in N(k)} \beta_{k,j}\, x_j \le w_k\Big(1 - \sum_{j\in N(k)} x_j\Big) \le 0 \qquad (23)$$
Thus we show f(x′) ≤ f(x). On the other hand, consider a graph G = (V = {1, 2}, E = {(1, 2)}) with β_{12} < w_1 < w_2. The maximum independent set is {2}, represented by x = (0, 1). However, in this case the infeasible assignment x′ = (1, 1) satisfies f(x′) ≤ f(x), so the energy function is biased. This shows that the condition we derived is sharp.
B.2 MAXIMUM CLIQUE
A clique is a subset of the vertices S ⊆ V such that every two distinct i, j ∈ S are adjacent: (i, j) ∈ E. The maximum clique problem is finding a clique S with the largest weight. Rigorously, if we denote x_i = 1 to indicate i ∈ S and x_i = 0 to indicate i ∉ S, the problem can be formulated as:
$$\operatorname*{argmin}_{x\in\{0,1\}^n} \; c(x) := -\sum_{i=1}^{n} w_i x_i, \quad \text{subject to } x_i x_j = 0,\ \forall (i,j)\in E^c \qquad (24)$$
where E^c = {(i, j) ∈ V × V : i ≠ j, (i, j) ∉ E} is the set of complement edges of graph G. We define the corresponding energy function:
$$f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j)\in E^c} \beta_{ij}\, x_i x_j \qquad (25)$$
We are going to prove the following proposition. Proposition B.2. If β_{ij} ≥ min{w_i, w_j} for all (i, j) ∈ E^c, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 24 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}^n, if x satisfies all constraints, we simply let x′ = x. Otherwise, there must exist an edge (i, j) ∈ E^c such that x_i x_j = 1. Denoting k = argmin{w_i, w_j}, we define x′_i = x_i for i ≠ k and x′_k = 0. In this case, we have:
$$f(x') - f(x) = w_k - \sum_{j:(k,j)\in E^c} \beta_{k,j}\, x_j \le w_k\Big(1 - \sum_{j:(k,j)\in E^c} x_j\Big) \le 0 \qquad (26)$$
Thus we show f(x′) ≤ f(x). On the other hand, consider a graph G = (V = {1, 2}, E = {}) with β_{12} < w_1 < w_2. The maximum clique is {2}, represented by x = (0, 1). However, in this case the infeasible assignment x′ = (1, 1) satisfies f(x′) ≤ f(x), so the energy function is biased. This shows that the condition we derived is sharp.
B.3 MINIMUM DOMINATING SET
For MDS, we use the energy function:
$$f(x) := \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \beta_i (1 - x_i) \prod_{j\in N(i)} (1 - x_j) \qquad (27)$$
We are going to prove the following proposition. Proposition B.3. If β_i ≥ min_k{w_k : k ∈ N(i) or k = i}, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 17 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}^n, if x satisfies all constraints, we simply let x′ = x. Otherwise, there must exist a node t ∈ V such that x_t = 0 and x_j = 0 for all j ∈ N(t). Letting k = argmin{w_j : j ∈ N(t) or j = t}, we define x′_i = x_i for i ≠ k and x′_k = 1. In this case, we have:
$$f(x') - f(x) = w_k - \beta_t + \sum_{i\ne t} \beta_i \Big[ (1 - x'_i) \prod_{j\in N(i)} (1 - x'_j) - (1 - x_i) \prod_{j\in N(i)} (1 - x_j) \Big] \le 0 \qquad (28)$$
Thus, we prove f(x′) ≤ f(x). On the other hand, consider a graph G = (V = {1}, E = {}) with β_1 < w_1. The minimum dominating set is {1}, represented by x = (1). However, in this case the infeasible assignment x′ = (0) satisfies f(x′) ≤ f(x), so the energy function is biased. This shows that the condition we derived is sharp.
B.4 MINIMUM CUT
For minimum cut, we use the energy function:
$$f(x) := \sum_{(i,j)\in E} x_i (1 - x_j)\, w_{ij} + \beta \Big( \sum_{i=1}^{n} d_i x_i - D_1 \Big)_+ + \beta \Big( D_0 - \sum_{i=1}^{n} d_i x_i \Big)_+ \qquad (29)$$
We are going to prove the following proposition. Proposition B.4. If β ≥ max_i{ ∑_{j∈N(i)} |w_{i,j}| }, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 19 and has lower energy: f(x′) ≤ f(x).
C EXPERIMENT DETAILS
C.1 HARDWARE
All methods were run on Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz, with 377GB of available RAM. The neural networks were executed on a single RTX6000 25GB graphics card. The code was executed on version 1.9.0 of PyTorch and version 1.7.2 of PyTorch Geometric.
C.2 GREEDY ALGORITHM
For MIS, the greedy algorithm can be described by the following steps (a Python sketch is given below):

1. Pick the node i with the smallest degree d_i in the current graph and add it to the independent set.

2. Delete i and its neighborhood N(i) = {j : (i, j) ∈ E} from the current graph.

3. Repeat steps 1-2 until the current graph is empty.
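A minimal Python sketch of this minimum-degree greedy (using networkx; the function name is ours, not the authors'):

```python
import networkx as nx


def greedy_mis(graph: nx.Graph) -> set:
    """Minimum-degree greedy for MIS: repeatedly take the lowest-degree node,
    add it to the solution, and delete it together with its neighbourhood."""
    g = graph.copy()
    independent_set = set()
    while g.number_of_nodes() > 0:
        node = min(g.nodes, key=g.degree)
        independent_set.add(node)
        g.remove_nodes_from(list(g.neighbors(node)) + [node])
    return independent_set
```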
For maximum clique, we first transform the graph into its complement graph and then apply the greedy algorithm for MIS.
For MDS, the greedy algorithm can be described by the following steps (a Python sketch is given below):

1. For every node i, initialize its state s_i = 1 to indicate that it has not yet been covered.

2. For every node i, initialize its covering number c_i = s_i + ∑_{j∈N(i)} s_j to indicate how many uncovered nodes can be covered by selecting node i.

3. Select the node i with the largest covering number c_i and add it to the dominating set.

4. Mark s_i = 0 and s_j = 0 for j ∈ N(i), and update the covering numbers.

5. Repeat steps 3-4 until all s_i = 0.
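A corresponding sketch of the covering-number greedy for MDS (again an illustrative implementation, not the authors' code):

```python
import networkx as nx


def greedy_mds(graph: nx.Graph) -> set:
    """Greedy MDS: repeatedly pick the node that covers the most uncovered nodes."""
    uncovered = set(graph.nodes)  # corresponds to s_i = 1 in the steps above
    dominating_set = set()
    closed = {n: {n} | set(graph.neighbors(n)) for n in graph.nodes}
    while uncovered:
        node = max(graph.nodes, key=lambda n: len(closed[n] & uncovered))
        dominating_set.add(node)
        uncovered -= closed[node]
    return dominating_set
```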
C.3 DATASETS
For MIS and maximum clique, we follow Karalias & Loukas (2020) and use RB graphs (Xu et al., 2007). The construction of RB graphs is controlled by the parameters n, k, and p. Following Karalias & Loukas (2020), for small graphs we sample n uniformly from the integers [20, 25] and k uniformly from [5, 12]; for large graphs we sample n uniformly from the integers [40, 55] and k uniformly from [20, 25].
For the minimum dominating set, we follow Dai et al. (2020) and use Barabási–Albert networks (Barabási & Albert, 1999) with attachment parameter 4.
C.4 COMPARISON TO SUPERVISED LEARNING
In order to compare our unsupervised results to supervised results, we provide evaluation results for the MDS problem on BA-4 graphs, using the supervised learning results from Dai et al. (2020). As shown in Table 5, annealed training significantly improves the performance of unsupervised learning and attains a ratio very close to that of supervised learning. We also provide a comparison with supervised learning on small RB graphs for MIS. As a source of labels, we use Gurobi to solve the maximum independent set problem; due to computational limitations, we set a time limit of 10 seconds, and some of the instances are not solved to optimality. In this case, as shown in Table 6, we observe that with a proper training algorithm, unsupervised learning can even beat supervised learning.
C.5 EVALUATION ON VERY LARGE GRAPHS
We conduct extra experiments for MIS on large BA-4 graphs following Dai et al. (2020). The model is trained on BA-4 graphs with 1024-1100 nodes. For each larger size, we evaluate the methods on 100 graphs and report the mean, standard deviation, and average running time. We can see that the performance of Gurobi decreases as the graph size increases, while the learning-based approaches degrade more gracefully. Another observation is that the learning-based approaches have much smaller running times; note that Dai et al. (2020) implement conditional decoding in C++, while our conditional decoding is implemented in Python. | 1. What is the primary contribution of the paper in the field of combinatorial optimization?
2. How does the proposed framework address the challenge of training being stuck at local optima?
3. Can you provide more information about the annealed loss function used in the framework?
4. How do the experimental results demonstrate the effectiveness of the proposed approach?
5. Are there any limitations or potential drawbacks to the methodology proposed in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a generic unsupervised learning framework for combinatorial optimization problems. It tackles the challenge of training being stuck at local optima. The main idea of this paper is to transform the CO problem equivalently to an energy-based model (EBM) and use an annealed loss function to prevent the training from being stuck at local optima. It shows substantial improvement in numerical experiments on four types of CO problems.
Strengths And Weaknesses
Strength:
The proposed framework is new and can help the training from being stuck at local optima.
The experiments seem to show a substantial improvement over existing training methods.
Weakness: None noted.
Clarity, Quality, Novelty And Reproducibility
Clarity: overall the paper is well written and clearly motivated. Novelty: I think the novelty of the methodology is fair. Though using annealed loss function for finding global optima is common in global optimization, it seems to me that it is the first time applied to training GNN for combinatorial optimization. |
ICLR | Title
Annealed Training for Combinatorial Optimization on Graphs
Abstract
The hardness of combinatorial optimization (CO) problems hinders collecting solutions for supervised learning. However, learning neural networks for CO problems is notoriously difficult given the lack of labeled data as the training gets trapped easily at local optima. We propose a simple but effective annealed training framework for CO problems in this work. In particular, we transform CO problems into unbiased energy-based models (EBMs). We carefully selected the penalties terms to make the EBMs as smooth as possible. Then we train graph neural networks to approximate the EBMs and we introduce an annealed loss function to prevent the training from being stuck at local optima near the initialization. An experimental evaluation demonstrates that our annealed training framework obtains substantial improvements. In four types of CO problems, our method achieves performance substantially better than other unsupervised neural methods on both synthetic and real-world graphs.
1 INTRODUCTION
Combinatorial Optimization (CO) problems occur whenever there is a requirement to select the best option from a finite set of alternatives. They arise in various application areas, like business, medicine, and engineering (Paschos, 2013). Many CO problems are NP-complete (Karp, 1972; Garey & Johnson, 1979). Thus, excluding the use of exact algorithms to find the optimal solution (Padberg & Rinaldi, 1991; Wolsey & Nemhauser, 1999), different heuristic methods are employed to find suitable solutions in a reasonable time (Nemhauser et al., 1978; Dorigo et al., 2006; Hopfield & Tank, 1985; Kirkpatrick et al., 1983).
Often, instances from the same combinatorial optimization problem family are solved repeatedly, giving rise to the opportunity of learning to improve the heuristic (Bengio et al., 2020). Recently, learning algorithms for CO problems have shown much promise, including supervised (Khalil et al., 2016; Gasse et al., 2019; Li et al., 2018; Selsam et al., 2018; Nair et al., 2020), unsupervised (Karalias & Loukas, 2020; Toenshoff et al., 2021), and reinforcement learning (Dai et al., 2017; Sun et al., 2020; Yolcu & Póczos, 2019; Chen & Tian, 2019). The success of supervised learning relies on labeled data; however, solving a hard instance could take several hours or even days, which makes label generation computationally prohibitive (Yehuda et al., 2020). Reinforcement learning, suffering from its larger state space and lack of full differentiability, tends to be more challenging and time-consuming to train.
Unsupervised learning usually transforms a CO problem into an optimization problem with a differentiable objective function f where the minima represent discrete solutions (Hopfield & Tank, 1985; Smith, 1999; Karalias & Loukas, 2020). Although this framework allows for efficient learning on large, unlabeled datasets, it is not without challenges. The objective function is typically highly non-convex (Mezard & Montanari, 2009). During learning, the model’s parameters can easily get trapped near a local optimum close to the initialization, never reaching the optimal set of parameters. This makes unsupervised learning for CO problems extremely hard.
To address this challenge, we propose an annealed training framework. In detail, given a CO problem, we consider a tempered EBM P_τ ∝ e^{−f(x)/τ}, where the energy function f unifies constrained and unconstrained CO problems via the big-M method, that is, by adding large penalties for violated constraints. We derive the minimum values of the penalty coefficients for different CO problems that give us the smoothest unbiased energy-based models. We train a graph neural network (GNN) that
predicts a variational distribution Qϕ to approximate the energy-based model Pτ . During training, we set a high initial temperature τ and decrease it gradually during the training process. When τ is large, Pτ is close to a uniform distribution and only has shallow local optima, such that the parameter θ can traverse to distant regions. When τ decreases to values small enough, the unbiased model Pτ will concentrate on the optimal solutions to the original CO problem.
The experiments are evaluated on four NP-hard graph CO problems: MIS, maximum clique, MDS, and minimum cut. On both synthetic and real-world graphs, our annealed training framework achieves excellent performance compared to other unsupervised neural methods (Toenshoff et al., 2021; Karalias & Loukas, 2020), classical algorithms (Aarts et al., 2003; Bilbro et al., 1988), and integer solvers (Gurobi Optimization). The ablation study demonstrates the importance of selecting proper penalty coefficients and cooling schedules.
In summary, our work has the following contributions:
• We propose an annealed learning framework for generic unsupervised learning on combinatorial optimization problems. It is simple to implement yet effective in improving unsupervised learning across various problems on both synthetic and real graphs.
• We conduct ablation studies showing that: 1) annealed training enables the parameters to escape from local optima and traverse a longer distance; 2) selecting proper penalty coefficients is essential; 3) using a sufficiently large initial temperature is critical.
2 ANNEALED TRAINING FOR COMBINATORIAL OPTIMIZATION
We want to learn a graph neural network G_θ to solve combinatorial optimization problems. Given an instance I, the network generates a feature vector ϕ = G_θ(I) that determines a variational distribution Q_ϕ, from which we decode solutions. This section presents our annealed training framework for training G_θ. We first represent CO problems via an energy-based model, then define the annealed loss function and explain how it helps training, and finally give a toy example to aid understanding.
2.1 ENERGY BASED MODEL
We denote the set of combinatorial optimization (CO) problems as I. An instance I ∈ I is
$$I = \big(c(\cdot), \{\psi_i\}_{i=1}^{m}\big) := \operatorname*{argmin}_{x\in\{0,1\}^n} c(x) \quad \text{s.t. } \psi_i(x) = 0,\ i = 1, \dots, m \qquad (1)$$
where c(·) is the objective function and ψ_i(x) ∈ {0, 1} indicates whether the i-th constraint is violated (ψ_i(x) = 0 when it is satisfied). We rewrite the constrained problem into an equivalent unconstrained form via the big-M method:
$$\operatorname*{argmin}_{x\in\{0,1\}^n} \; f^{(I)}(x) := c(x) + \sum_{i=1}^{m} \beta_i \psi_i(x), \quad \beta_i \ge 0 \qquad (2)$$
If f^{(I)} attains its smallest values on optimal solutions of equation 1, we refer to it as unbiased. The selection of the penalty coefficients β plays an important role in the success of training, and we discuss our choice of β in detail in Section 3. Using an unbiased f^{(I)} as an energy that measures the fitness of a solution x, solving CO problems is converted into finding low-energy states. Accordingly, we can define the unbiased energy-based models (EBMs):
$$P^{(I)}_\tau(x) \propto e^{-f^{(I)}(x)/\tau} \qquad (3)$$
where a state x is more likely to be observed than another state x′ if it has lower energy f^{(I)}(x) < f^{(I)}(x′). The EBMs naturally introduce a temperature τ that controls the smoothness of the distribution. When f is unbiased, it has the following property:
Proposition 2.1. Assume f is unbiased, that is to say, all minimizers of equation 2 are feasible solutions of equation 1. When the temperature τ increases to infinity, the energy-based model P_τ converges to the uniform distribution over the whole state space {0, 1}^n. When the temperature τ decreases to zero, the energy-based model P_τ converges to the uniform distribution over the optimal solutions of equation 1.
The proposition above shows that the temperature τ in unbiased EBMs provides an interpolation between a flat uniform distribution and a sharp distribution concentrated on optimal solutions. This idea is the key to the success of simulated annealing (Kirkpatrick et al., 1983) in inference tasks. We will show that the temperature also helps in learning.
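For very small instances the EBM of equation 3 can be written down exactly by enumeration, which makes the effect of τ easy to inspect. The sketch below does this for a two-node MIS instance; the penalty value 1.5 (slightly above the critical one) is chosen only so that the cold distribution concentrates on the two feasible optima, and all names are illustrative.

```python
import itertools
import math


def energy(x, c, constraints, betas):
    """Energy of equation 2: objective plus penalties for violated constraints."""
    return c(x) + sum(b * psi(x) for b, psi in zip(betas, constraints))


def boltzmann(c, constraints, betas, n, tau):
    """Exact EBM of equation 3 by enumeration (only feasible for tiny n)."""
    states = list(itertools.product([0, 1], repeat=n))
    weights = [math.exp(-energy(x, c, constraints, betas) / tau) for x in states]
    z = sum(weights)
    return {x: w / z for x, w in zip(states, weights)}


# Toy MIS on a single edge (1)-(2): c(x) = -(x1 + x2), constraint x1 * x2 = 0.
def c(x):
    return -(x[0] + x[1])

constraints = [lambda x: x[0] * x[1]]
hot = boltzmann(c, constraints, betas=[1.5], n=2, tau=10.0)   # close to uniform
cold = boltzmann(c, constraints, betas=[1.5], n=2, tau=0.05)  # mass on (1,0) and (0,1)
```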
2.2 TEMPERED LOSS AND PARAMETERIZATION
We want to learn a graph neural network G_θ parameterized by θ. Given an instance I ∈ I, G_θ(I) = ϕ generates a vector ϕ that determines a variational distribution Q_ϕ to approximate the target distribution P^{(I)}_τ. We want to minimize the KL divergence:
$$D_{KL}\big(Q_\phi \,\|\, P^{(I)}_\tau\big) = \int Q_\phi(x)\left(\log Q_\phi(x) - \log \frac{e^{-f^{(I)}(x)/\tau}}{\sum_{z\in\{0,1\}^n} e^{-f^{(I)}(z)/\tau}}\right) dx \qquad (4)$$

$$= \frac{1}{\tau}\, \mathbb{E}_{x\sim Q_\phi(\cdot)}\big[f^{(I)}(x)\big] - H(Q_\phi) + \log \sum_{z\in\{0,1\}^n} e^{-f^{(I)}(z)/\tau} \qquad (5)$$

where $H(p) = -\sum_x p(x)\log p(x)$ denotes the entropy of a distribution p. Removing the terms not involving ϕ and multiplying by the constant τ, we define our annealed loss functions for ϕ and θ as:

$$\mathcal{L}_\tau(\phi, I) = \mathbb{E}_{x\sim Q_\phi(\cdot)}\big[f^{(I)}(x)\big] - \tau H(Q_\phi) \qquad (6)$$

$$\mathcal{L}_\tau(\theta) = \mathbb{E}_{I\sim \mathcal{I}}\Big[\, \mathbb{E}_{x\sim Q_{G_\theta(I)}(\cdot)}\big[f^{(I)}(x)\big] - \tau H\big(Q_{G_\theta(I)}\big) \Big] \qquad (7)$$
In this work, we consider the variational distribution as a product distribution:
$$Q_\phi(x) = \prod_{i=1}^{n} (1 - \phi_i)^{1 - x_i}\, \phi_i^{x_i} \qquad (8)$$
where ϕ ∈ [0, 1]^n. Such a form is popular in learning graph neural networks for combinatorial optimization (Li et al., 2018; Dai et al., 2020; Karalias & Loukas, 2020) for its simplicity and effectiveness. However, directly applying it to unsupervised learning is challenging: unlike supervised learning, where the cross-entropy loss is convex in ϕ, the loss L_τ(ϕ, I) in unsupervised learning can be highly non-convex, especially when τ is small.
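Under the product distribution, both terms of equation 6 have closed forms (cf. equation 13 for the MIS energy used as the running example in Sections 2.4 and 3.1), so the loss can be computed without sampling. The PyTorch sketch below does this for weighted MIS with a single scalar penalty β; the tensor layouts and names are our own assumptions rather than the paper's implementation.

```python
import torch


def annealed_mis_loss(phi, edge_index, weights, beta, tau, eps=1e-6):
    """L_tau(phi, I) of equation 6 for the MIS energy under the product Q_phi.

    phi: node probabilities in (0, 1); edge_index: [2, |E|] tensor listing each
    undirected edge once; weights: node weights; beta: scalar penalty coefficient.
    """
    phi = phi.clamp(eps, 1 - eps)
    src, dst = edge_index
    expected_energy = -(weights * phi).sum() + beta * (phi[src] * phi[dst]).sum()
    entropy = -(phi * torch.log(phi) + (1 - phi) * torch.log(1 - phi)).sum()
    return expected_energy - tau * entropy
```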
2.3 ANNEALED TRAINING
To address the non-convexity in training, we employ annealed training. In particular, we use a large initial temperature τ_0 to smooth the loss function and reduce τ_t gradually to zero during training. By Proposition 2.1, this can be seen as curriculum learning (Bengio et al., 2009) along the interpolation path from the easier uniform distribution to the more challenging target distribution.
Why is this helpful? Answering this requires a closer look at the training procedure. Since the loss function in equation 7 is an expectation over the set of instances I, we use a batch of instances I_1, ..., I_B to calculate the empirical loss L̂_τ(θ) and perform stochastic gradient descent. This gives:
$$\nabla_\theta \hat{\mathcal{L}}_\tau(\theta) = \sum_{i=1}^{B} \nabla_\theta \mathcal{L}_\tau\big(G_\theta(I_i), I_i\big) = \sum_{i=1}^{B} \frac{\partial G_\theta(I_i)}{\partial \theta}\, \nabla_\phi \mathcal{L}_\tau(\phi, I_i)\big|_{\phi=G_\theta(I_i)} \qquad (9)$$

$$= \mathbb{E}_{I\sim \mathcal{I}}\left[ \frac{\partial G_\theta(I)}{\partial \theta}\, \nabla_\phi \mathcal{L}_\tau(\phi, I)\big|_{\phi=G_\theta(I)} \right] + \xi \qquad (10)$$

$$\approx \mathbb{E}_{I\sim \mathcal{I}}\left[ \frac{\partial G_\theta(I)}{\partial \theta}\, \Big(\nabla_\phi \mathcal{L}_\tau(\phi, I)\big|_{\phi=G_\theta(I)} + \zeta\Big) \right] \qquad (11)$$
In equation 10, we assume that batching introduces a stochastic term ξ in the gradient with respect to θ. In equation 11, we incorporate the stochastic term into the gradient with respect to ϕ. If we assume ζ is Gaussian noise, the inner term g = ∇_ϕ L_τ(ϕ, I)|_{ϕ=G_θ(I)} + ζ acts as a stochastic Langevin gradient with respect to ϕ (Welling & Teh, 2011). Since the training data are sampled from a fixed distribution I ∼ I, the scale of the noise ζ is also fixed. When L_τ(ϕ, I) is not smooth, the randomness from ζ is negligible compared to the gradient ∇L_τ(ϕ, I) and cannot bring ϕ out of local optima. By introducing the temperature τ, we smooth the loss function and reduce the magnitude of ∇L_τ(ϕ, I). During training, annealed training thus performs an implicit simulated annealing (Kirkpatrick et al., 1983) over ϕ.
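Putting the pieces together, annealed training only changes the outer loop of an otherwise standard training procedure. The sketch below is schematic: `model`, `loader`, and `loss_fn` are placeholders (for example, `loss_fn` could wrap the closed-form MIS loss sketched earlier), not names from the released code.

```python
import torch


def train_annealed(model, loader, loss_fn, tau_schedule, lr=1e-3):
    """Annealed training loop: one temperature per epoch, standard Adam updates."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for tau in tau_schedule:
        for batch in loader:
            phi = model(batch)               # variational parameters for this batch
            loss = loss_fn(phi, batch, tau)  # equation 6 averaged over the batch
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```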
2.4 A TOY EXAMPLE
We look at a toy example to gain a more intuitive understanding of annealed training. Consider an MIS problem on an undirected, unweighted graph G = (V, E); the corresponding energy function f(x) is:
$$f(x) = -\sum_{i=1}^{n} x_i + \sum_{(i,j)\in E} x_i x_j \qquad (12)$$
Its correctness is justified by Proposition 3.1. When we use the variational distribution Q_ϕ of equation 8, the first term in L_τ(ϕ, I) becomes:
$$\mathbb{E}_{x\sim Q_\phi(\cdot)}\big[f^{(I)}(x)\big] = -\sum_{i=1}^{n} \phi_i + \sum_{(i,j)\in E} \phi_i \phi_j \qquad (13)$$
and accordingly, the gradient with respect to ϕ_i is:

$$g = -1 + 2\sum_{j\in N(i)} \phi_j + \tau\big(\log \phi_i - \log(1 - \phi_i)\big) + \zeta \qquad (14)$$
where we assume ζ ∼ N(0, σ²) for a very small σ. When the temperature is τ = 0, each ϕ_i collapses to either 0 or 1 very quickly: when ϕ_i = 1 we have g = −1 + ζ, and when ϕ_i = 0 we have g ≥ 1 + ζ. Since σ is small, the noise ζ can hardly have an effect, and ϕ gets stuck at a local optimum, i.e., any maximal independent set such as the one in Figure 1(a). In Figure 1, we simulate the input (a) at decreasing temperatures τ = 1.0, 0.5, 0.1. When τ is large, all ϕ_i are pushed toward a neutral state, e.g., Figure 1(b), where the differences between the ϕ_i are on the scale of 10⁻³. In this case, the noise ζ can significantly affect the sign of the gradient g and lead to phase transitions. By gradually decreasing the temperature, ϕ collapses to the global optimum and provides correct guidance for updating θ.
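The toy dynamics can be reproduced with a few lines of numpy; the step size, noise scale, and the particular annealing path below are illustrative choices rather than values from the paper.

```python
import numpy as np


def simulate_toy(adj, num_steps=2000, sigma=1e-3, step=0.01, seed=0):
    """Follow the noisy gradient of equation 14 while lowering the temperature."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    phi = np.clip(rng.uniform(0.4, 0.6, size=n), 1e-4, 1 - 1e-4)
    for k in range(num_steps):
        tau = 1.0 / (1.0 + 10.0 * k / num_steps)  # anneal from 1.0 towards ~0.09
        g = -1.0 + 2.0 * adj @ phi + tau * (np.log(phi) - np.log(1.0 - phi))
        g += sigma * rng.standard_normal(n)
        phi = np.clip(phi - step * g, 1e-4, 1 - 1e-4)
    return phi
```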
3 CASE STUDY
We consider four combinatorial optimization problems on graphs in this work: maximum independent set (MIS), maximum clique, minimum dominating set (MDS), and minimum cut. All problems can be represented on an undirected weighted graph G = (V, E, w), where V = {1, ..., n} is the set of nodes, E is the set of edges, and w is the weight function: for any i ∈ V, w_i = w(i) is the weight of node i, and for any (i, j) ∈ E, w_{ij} = w(i, j) is the weight of the edge. For each problem, we derive the minimum value of the penalty coefficient β such that the energy function attains its lowest energy at optimal solutions, and we use the derived values to design the loss functions in our experiments.
3.1 MAXIMUM INDEPENDENT SET AND MAXIMUM CLIQUE
An independent set is a subset of the vertices S ⊆ V , such that for arbitrary i, j ∈ S, (i, j) /∈ E. The MIS problem is finding an independent set S with the largest weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
$$\operatorname*{argmin}_{x\in\{0,1\}^n} \; c(x) := -\sum_{i=1}^{n} w_i x_i, \quad \text{subject to } x_i x_j = 0,\ \forall (i,j)\in E \qquad (15)$$
We define the corresponding energy function:
$$f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j)\in E} \beta_{ij}\, x_i x_j \qquad (16)$$
Proposition 3.1. If β_{ij} ≥ max{w_i, w_j} for all (i, j) ∈ E, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 15 and has lower energy: f(x′) ≤ f(x).
Maximum clique is equivalent to MIS on the complement graph. Since a GNN is unaware of this connection, studying maximum clique is still fruitful for learning-based approaches. The formal definition of maximum clique is given in Appendix B.2; here we only show how to properly select the penalty coefficient. Proposition 3.2. If β_{ij} ≥ max{w_i, w_j} for all (i, j) ∈ E^c, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 24 and has lower energy: f(x′) ≤ f(x).
3.2 MINIMUM DOMINATING SET
A dominating set is a subset of the vertices S ⊆ V such that for any v ∈ V there exists u ∈ S with u = v or (u, v) ∈ E. The MDS problem is finding a dominating set S with the minimum weight. Rigorously, if we denote x_i = 1 to indicate i ∈ S and x_i = 0 to indicate i ∉ S, the problem can be formulated as:
$$\operatorname*{argmin}_{x\in\{0,1\}^n} \; c(x) := \sum_{i=1}^{n} w_i x_i, \quad \text{subject to } (1 - x_i) \prod_{j\in N(i)} (1 - x_j) = 0,\ \forall i \in V \qquad (17)$$
We define the corresponding energy function:
$$f(x) := \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \beta_i (1 - x_i) \prod_{j\in N(i)} (1 - x_j) \qquad (18)$$
Proposition 3.3. If β_i ≥ min_k{w_k : k ∈ N(i) or k = i}, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 17 and has lower energy: f(x′) ≤ f(x).
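As with MIS, the MDS energy of equation 18 has a closed-form expectation under the product distribution, since E[(1 − x_i)∏_{j∈N(i)}(1 − x_j)] = (1 − ϕ_i)∏_{j∈N(i)}(1 − ϕ_j) by independence. The PyTorch sketch below illustrates this; the data layout and names are assumptions, not prescribed by the paper.

```python
import torch


def expected_mds_energy(phi, neighbors, weights, beta):
    """E_{x ~ Q_phi}[f(x)] for the MDS energy of equation 18.

    neighbors[i] is a LongTensor with the neighbours of node i; beta is a scalar
    penalty (Proposition 3.3 suggests the smallest weight in the closed
    neighbourhood as the critical choice).
    """
    cover = torch.stack(
        [(1 - phi[i]) * torch.prod(1 - phi[nbrs]) for i, nbrs in enumerate(neighbors)]
    ).sum()
    return (weights * phi).sum() + beta * cover
```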
3.3 MINIMUM CUT
A partition consists of two subsets, S and V\S. The cut cut(S) is defined as the total weight of the edges between S and V\S. The volume of S is defined as vol(S) = ∑_{i∈S} d_i, where d_i is the degree of node i. The minimum cut problem is to find an S with minimum cut, subject to the volume of S lying in [D_0, D_1]. Rigorously, if we denote x_i = 1 to indicate i ∈ S and x_i = 0 to indicate i ∉ S, the problem can be formulated as:
$$\operatorname*{argmin}_{x\in\{0,1\}^n} \; c(x) := \sum_{(i,j)\in E} x_i (1 - x_j)\, w_{ij}, \quad \text{subject to } D_0 \le \sum_{i=1}^{n} d_i x_i \le D_1 \qquad (19)$$
We define the corresponding energy function:
$$f(x) := \sum_{(i,j)\in E} x_i (1 - x_j)\, w_{ij} + \beta \Big( \sum_{i=1}^{n} d_i x_i - D_1 \Big)_+ + \beta \Big( D_0 - \sum_{i=1}^{n} d_i x_i \Big)_+ \qquad (20)$$
Proposition 3.4. If β ≥ max_i{ ∑_{j∈N(i)} |w_{i,j}| }, then for any x ∈ {0, 1}^n, there exists an x′ ∈ {0, 1}^n that satisfies the constraints in equation 19 and has lower energy: f(x′) ≤ f(x).
4 RELATED WORK
Recently, there has been a surge of interest in learning algorithms for CO problems (Bengio et al., 2020). Supervised learning is widely used. Numerous works have combined GNNs with search procedures to solve classical CO problems, such as the traveling salesman problem (Vinyals et al., 2015; Joshi et al., 2019; Prates et al., 2019), graph matching (Wang et al., 2019; 2020), quadratic assignments (Nowak et al., 2017), graph coloring (Lemos et al., 2019), and MIS (Li et al., 2018).
[Results rows displaced from the tables of Section 5.2; each cell is approximation ratio ± std followed by average running time in seconds, with one column group per test set in the order used in the tables. The two blocks below presumably correspond to the MIS and maximum clique tables.]
Erdos: 0.805 ± 0.052 (0.156) | 0.781 ± 0.644 (2.158) | 0.986 ± 0.056 (0.010) | 0.975 ± 0.033 (0.020)
Ours: 0.898 ± 0.030 (0.165) | 0.848 ± 0.529 (2.045) | 0.997 ± 0.020 (0.010) | 0.986 ± 0.012 (0.020)
RUNCSP: 0.823 ± 0.145 (1.936) | 0.587 ± 0.312 (7.282) | 0.912 ± 0.101 (0.254) | 0.845 ± 0.184 (4.429)
RUNCSP(A): 0.851 ± 0.158 (1.942) | 0.629 ± 0.451 (7.268) | 0.923 ± 0.188 (0.281) | 0.877 ± 0.209 (4.438)
Greedy: 0.761 ± 0.058 (0.002) | 0.720 ± 0.046 (0.009) | 0.996 ± 0.017 (0.001) | 0.957 ± 0.037 (0.006)
MFA: 0.784 ± 0.058 (0.042) | 0.747 ± 0.056 (0.637) | 0.998 ± 0.007 (0.002) | 0.994 ± 0.010 (0.003)
G(0.5s): 0.864 ± 0.169 (0.723) | 0.632 ± 0.176 (1.199) | 1.000 ± 0.000 (0.029) | 0.950 ± 0.191 (0.441)
G(1.0s): 0.972 ± 0.065 (1.063) | 0.635 ± 0.176 (1.686) | 1.000 ± 0.000 (0.029) | 1.000 ± 0.000 (0.462)

Erdos: 0.813 ± 0.067 (0.279) | 0.735 ± 0.084 (0.622) | 0.960 ± 0.019 (0.139) | 0.822 ± 0.085 (0.222)
Ours: 0.901 ± 0.055 (0.262) | 0.831 ± 0.078 (0.594) | 0.988 ± 0.011 (0.143) | 0.920 ± 0.083 (0.213)
RUNCSP: 0.821 ± 0.131 (2.045) | 0.574 ± 0.299 (7.332) | 0.887 ± 0.134 (0.164) | 0.832 ± 0.153 (4.373)
RUNCSP(A): 0.860 ± 0.189 (2.101) | 0.609 ± 0.381 (7.294) | 0.895 ± 0.162 (0.188) | 0.877 ± 0.221 (4.442)
Greedy: 0.764 ± 0.064 (0.002) | 0.727 ± 0.038 (0.014) | 0.999 ± 0.002 (0.001) | 0.959 ± 0.034 (0.001)
MFA: 0.804 ± 0.064 (0.144) | 0.710 ± 0.045 (0.147) | 1.000 ± 0.000 (0.005) | 0.994 ± 0.010 (0.010)
G(0.5s): 0.948 ± 0.076 (0.599) | 0.812 ± 0.087 (0.617) | 0.997 ± 0.035 (0.061) | 0.976 ± 0.065 (0.382)
G(1.0s): 0.984 ± 0.042 (0.705) | 0.847 ± 0.101 (1.077) | 0.999 ± 0.015 (0.062) | 0.997 ± 0.029 (0.464)
Another fruitful direction is combining learning with existing solvers. For example, within the branch and bound algorithm, He et al. (2014); Khalil et al. (2016); Gasse et al. (2019); Nair et al. (2020) learn the variable selection policy by imitating the decisions of an oracle or of rules designed by human experts. However, the success of supervised learning relies on large labeled datasets, which are hard to generate efficiently in an unbiased and representative manner (Yehuda et al., 2020).
| 1. What is the focus and contribution of the paper on unsupervised learning with Graph Neural Networks?
2. What are the strengths and weaknesses of the proposed training procedure for combinatorial optimization problems?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the methodology, particularly in comparison to prior works like Karalias and Loukas [KL]?
5. How does the reviewer evaluate the effectiveness and stability of the proposed training procedure, and are there any suggestions for improvement? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new training procedure for the problem of unsupervised learning with Graph Neural Networks (GNN) for combinatorial optimization (CO) problems. The idea is to approximate the distribution over the solutions of the CO problem by a mean field distribution (see equation (8)) and to add an annealing factor. By carefully scheduling this annealing factor towards 0, the authors show empirically that their training improves the performances of a GNN on various problems: maximum clique or maximum independent set, minimum dominating set and minimum cut.
Strengths And Weaknesses
This paper is close to the work of Karalias and Loukas [KL] published at NeurIPs 2020. The authors make it clear in their paper that their contribution is the new training procedure and not the network architecture. They show that their procedure improve the performances with the same network as in [KL]. It seems to me that the only difference with regards to [KL] is the addition of an entropic term in the loss weighted with a parameter decreasing to 0. It would be nice to have more discussion and comparison with respect to [KL].
On page 6, the authors write: "The probabilistic method provides a good framework for unsupervised learning.However, optimizing the distribution is typically non-convex (Mezard & Montanari, 2009), making the training very unstable." I understand that their method is a contribution to make this optimization more stable. It would be nice to have experiments to substantiate this claim.
The result in this paper should probably be related to the literature about graphical models. Indeed, the distribution computed by the GNN (see equation (8)) is what is called a mean-field approximation for the true distribution. It is well-known that this approximation is limited and better approximations are known like Bethe approximation or expectation propagation or other variational methods... There are a lot of algorithms developed to solve CO problems in this setting that do not require any learning (see the survey of Wainwright and Jordan for example). These algorithms are not discussed in the present paper and should probably be compared to the approach proposed here.
More generally, the algorithm proposed here can be suboptimal at different stages: a) mean-field approximation; b) annealed training; c) conditional decoding. It would be interesting to quantify the impact of each of these steps. The ablation study done in section 6 is a good start concerning point b) above.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well written. The code is provided but I did not try to run it.
On page 3, I do not understand the following claim: "Unlike supervised learning, where the loss function cross-entropy is convex for φ, Lτ(φ, I) in unsupervised learning could be highly non-convex, especially when τ is small."
I think that by dominate set, you mean dominating set: https://en.wikipedia.org/wiki/Dominating_set |
ICLR | Title
Annealed Training for Combinatorial Optimization on Graphs
Abstract
The hardness of combinatorial optimization (CO) problems hinders collecting solutions for supervised learning. However, learning neural networks for CO problems is notoriously difficult given the lack of labeled data, as the training gets trapped easily at local optima. We propose a simple but effective annealed training framework for CO problems in this work. In particular, we transform CO problems into unbiased energy-based models (EBMs). We carefully select the penalty terms to make the EBMs as smooth as possible. Then we train graph neural networks to approximate the EBMs, and we introduce an annealed loss function to prevent the training from being stuck at local optima near the initialization. An experimental evaluation demonstrates that our annealed training framework obtains substantial improvements. In four types of CO problems, our method achieves performance substantially better than other unsupervised neural methods on both synthetic and real-world graphs.
1 INTRODUCTION
Combinatorial Optimization (CO) problems occur whenever there is a requirement to select the best option from a finite set of alternatives. They arise in various application areas, like business, medicine, and engineering (Paschos, 2013). Many CO problems are NP-complete (Karp, 1972; Garey & Johnson, 1979). Thus, excluding the use of exact algorithms to find the optimal solution (Padberg & Rinaldi, 1991; Wolsey & Nemhauser, 1999), different heuristic methods are employed to find suitable solutions in a reasonable time (Nemhauser et al., 1978; Dorigo et al., 2006; Hopfield & Tank, 1985; Kirkpatrick et al., 1983).
Often, instances from the same combinatorial optimization problem family are solved repeatedly, giving rise to the opportunity for learning to improve the heuristic (Bengio et al., 2020). Recently, learning algorithms for CO problems have shown much promise, including supervised (Khalil et al., 2016; Gasse et al., 2019; Li et al., 2018; Selsam et al., 2018; Nair et al., 2020), unsupervised (Karalias & Loukas, 2020; Toenshoff et al., 2021), and reinforcement learning (Dai et al., 2017; Sun et al., 2020; Yolcu & Póczos, 2019; Chen & Tian, 2019). The success of supervised learning relies on labeled data. However, solving a hard problem could take several hours or even days and is computationally prohibitive (Yehuda et al., 2020). Reinforcement learning, suffering from its larger state space and lack of full differentiability, tends to be more challenging and time-consuming to train.
Unsupervised learning usually transforms a CO problem into an optimization problem with a differentiable objective function f where the minima represent discrete solutions (Hopfield & Tank, 1985; Smith, 1999; Karalias & Loukas, 2020). Although this framework allows for efficient learning on large, unlabeled datasets, it is not without challenges. The objective function is typically highly non-convex (Mezard & Montanari, 2009). During learning, the model’s parameters can easily get trapped near a local optimum close to the initialization, never reaching the optimal set of parameters. This makes unsupervised learning for CO problems extremely hard.
To address this challenge, we propose an annealed training framework. In detail, given a CO problem, we consider a tempered EBM Pτ ∝ e−f(x)/τ , where the energy function f unifies constrained or unconstrained CO problems via the big-M method, that is to say, adding large penalties for violated constraints. We derive the minimum values of the penalty coefficient in different CO problems that give us the smoothest, unbiased energy-based models. We train a graph neural network (GNN) that
predicts a variational distribution Qϕ to approximate the energy-based model Pτ . During training, we set a high initial temperature τ and decrease it gradually during the training process. When τ is large, Pτ is close to a uniform distribution and only has shallow local optima, such that the parameter θ can traverse to distant regions. When τ decreases to values small enough, the unbiased model Pτ will concentrate on the optimal solutions to the original CO problem.
The experiments are evaluated on four NP-hard graph CO problems: MIS, maximum clique, MDS, and minimum cut. On both synthetic and real-world graphs, our annealed training framework achieves excellent performance compared to other unsupervised neural methods (Toenshoff et al., 2021; Karalias & Loukas, 2020), classical algorithms (Aarts et al., 2003; Bilbro et al., 1988), and integer solvers (Gurobi Optimization). The ablation study demonstrates the importance of selecting proper penalty coefficients and cooling schedules.
In summary, our work has the following contributions:
• We propose an annealed learning framework for generic unsupervised learning on combinatorial optimization problems. It is simple to implement yet effective in improving unsupervised learning across various problems on both synthetic and real graphs.
• We conducted ablation studies that show: 1) annealed training enables the parameters to escape from local optima and traverse a longer distance, 2) selecting proper penalty coefficients is essential, and 3) using a sufficiently large initial temperature is critical.
2 ANNEALED TRAINING FOR COMBINATORIAL OPTIMIZATION
We want to learn a graph neural network Gθ to solve combinatorial optimization problems. Given an instance I, the network Gθ generates a feature vector ϕ = Gθ(I) that determines a variational distribution Qϕ, from which we decode solutions. This section presents our annealed training framework for training Gθ. We first represent CO problems via an energy-based model. Then, we define the annealed loss function and explain how it helps in training. Finally, we give a toy example to aid understanding.
2.1 ENERGY BASED MODEL
We denote the set of combinatorial optimization (CO) problems as I. An instance I ∈ I is
I = (c(\cdot), \{\psi_i\}_{i=1}^{m}) := \arg\min_{x \in \{0,1\}^n} c(x) \quad \text{s.t.} \quad \psi_i(x) = 0, \; i = 1, \ldots, m \qquad (1)
where c(·) is the objective function and ψi(x) ∈ {0, 1} indicates whether the i-th constraint is violated (ψi(x) = 0 when it is satisfied). We rewrite the constrained problem into an equivalent unconstrained form via the big-M method:
\arg\min_{x \in \{0,1\}^n} f^{(I)}(x) := c(x) + \sum_{i=1}^{m} \beta_i \psi_i(x), \quad \beta_i \ge 0 \qquad (2)
If f^{(I)} has its smallest values on optimal solutions for equation 1, we refer to it as unbiased. The selection of the penalty coefficient β plays an important role in the success of training, and we will discuss our choice of β in detail in section 3. Using an unbiased f^{(I)} as an energy to measure the fitness of a solution x, solving CO problems is converted to finding low energy states. Accordingly, we can define the unbiased energy-based models (EBMs):
P^{(I)}_{\tau}(x) \propto e^{-f^{(I)}(x)/\tau} \qquad (3)
where a state x is more likely to be observed than another state x′ if it has a lower energy f^{(I)}(x) < f^{(I)}(x′). The EBMs naturally introduce a temperature τ to control the smoothness of the distribution. When f is unbiased, it has the following property:
Proposition 2.1. Assume f is unbiased, that’s to say, all minimizers of equation 2 are feasible solutions for equation 1. When the temperature τ increases to infinity, the energy-based model Pτ converges to a uniform distribution over the whole state space {0, 1}n. When the temperature τ decreases to zero, the energy-based model Pτ converges to a uniform distribution over the optimal solutions for equation 1.
The proposition above shows that the temperature τ in unbiased EBMs provides an interpolation between a flat uniform distribution and a sharp distribution concentrated on optimal solutions. This idea is the key to the success of simulated annealing (Kirkpatrick et al., 1983) in inference tasks. We will show that the temperature also helps in learning.
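To make Proposition 2.1 concrete, the following minimal sketch (our own illustration, not code from the paper; the 3-node path graph and unit weights are assumptions) enumerates the tempered EBM of equation 3 on a tiny MIS instance and shows how it flattens toward uniform as τ grows and concentrates on the optimum as τ shrinks.

```python
# Sketch: the tempered EBM of equation 3 on a toy MIS instance.
# The path graph on 3 nodes and unit weights are illustrative assumptions.
import itertools
import numpy as np

edges = [(0, 1), (1, 2)]   # path graph 0-1-2
n = 3

def energy(x):
    # Unweighted MIS energy (cf. equation 12): -sum_i x_i + sum_{(i,j) in E} x_i x_j
    return -sum(x) + sum(x[i] * x[j] for i, j in edges)

def tempered_ebm(tau):
    states = list(itertools.product([0, 1], repeat=n))
    logits = np.array([-energy(x) / tau for x in states])
    probs = np.exp(logits - logits.max())
    return states, probs / probs.sum()

for tau in [100.0, 1.0, 0.05]:
    states, probs = tempered_ebm(tau)
    best = states[int(np.argmax(probs))]
    print(f"tau={tau:6.2f}  most probable state={best}  prob={probs.max():.3f}")
# Large tau: close to uniform over all 8 states; small tau: the mass
# concentrates on the optimal independent set x = (1, 0, 1).
```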
2.2 TEMPERED LOSS AND PARAMETERIZATION
We want to learn a graph neural network Gθ parameterized by θ. Given an instance I ∈ I, Gθ(I) = ϕ generates a vector ϕ that determines a variational distribution Qϕ to approximate the target distribution P^{(I)}_τ. We want to minimize the KL-divergence:
D_{KL}(Q_\phi \,\|\, P^{(I)}_\tau) = \int Q_\phi(x) \Big( \log Q_\phi(x) - \log \frac{e^{-f^{(I)}(x)/\tau}}{\sum_{z \in \{0,1\}^n} e^{-f^{(I)}(z)/\tau}} \Big) dx \qquad (4)
= \frac{1}{\tau}\, \mathbb{E}_{x \sim Q_\phi(\cdot)}[f^{(I)}(x)] - H(Q_\phi) + \log \sum_{z \in \{0,1\}^n} e^{-f^{(I)}(z)/\tau} \qquad (5)
where H(p) = −∑_x p(x) log p(x) denotes the entropy of a distribution p. Removing the terms not involving ϕ and multiplying by the constant τ, we define our annealed loss functions for ϕ and θ as:
L_\tau(\phi, I) = \mathbb{E}_{x \sim Q_\phi(\cdot)}[f^{(I)}(x)] - \tau H(Q_\phi) \qquad (6)
L_\tau(\theta) = \mathbb{E}_{I \sim \mathcal{I}} \big[ \mathbb{E}_{x \sim Q_{G_\theta(I)}(\cdot)}[f^{(I)}(x)] - \tau H(Q_{G_\theta(I)}) \big] \qquad (7)
In this work, we consider the variational distribution as a product distribution:
Q_\phi(x) = \prod_{i=1}^{n} (1 - \phi_i)^{1 - x_i} \phi_i^{x_i} \qquad (8)
where ϕ ∈ [0, 1]n. Such a form is popular in learning graphical neural networks for combinatorial optimization (Li et al., 2018; Dai et al., 2020; Karalias & Loukas, 2020) for its simplicity and effectiveness. However, directly applying it to unsupervised learning is challenging. Unlike supervised learning, where the loss function cross-entropy is convex for ϕ, Lτ (ϕ, I) in unsupervised learning could be highly non-convex, especially when τ is small.
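Under the product distribution of equation 8, the expectation and entropy in equation 6 have a closed form; the sketch below (a non-authoritative reimplementation in PyTorch, using the unweighted MIS energy and our own variable names) computes Lτ(ϕ, I) so it can be differentiated with respect to ϕ.

```python
# Sketch of the annealed loss of equation 6 under the mean-field distribution
# of equation 8, for the unweighted MIS energy. Names and the toy graph are
# illustrative assumptions, not the authors' released code.
import torch

def annealed_mis_loss(phi, edge_index, tau, eps=1e-6):
    """phi: (n,) tensor in (0,1); edge_index: (2, |E|) long tensor; tau: temperature."""
    src, dst = edge_index
    # E_{x ~ Q_phi}[f(x)] = -sum_i phi_i + sum_{(i,j) in E} phi_i * phi_j  (cf. equation 13)
    expected_energy = -phi.sum() + (phi[src] * phi[dst]).sum()
    # Entropy of a product of Bernoulli variables
    p = phi.clamp(eps, 1 - eps)
    entropy = -(p * p.log() + (1 - p) * (1 - p).log()).sum()
    return expected_energy - tau * entropy

# Usage on a triangle graph, with logits standing in for the GNN output
edge_index = torch.tensor([[0, 0, 1], [1, 2, 2]])
logits = torch.zeros(3, requires_grad=True)
loss = annealed_mis_loss(torch.sigmoid(logits), edge_index, tau=1.0)
loss.backward()   # gradients flow back to the logits
```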
2.3 ANNEALED TRAINING
To address the non-convexity in training, we employ annealed training. In particular, we use a large initial temperature τ0 to smooth the loss function and reduce τt gradually to zero during training. From proposition 2.1, it can be seen as a curriculum learning (Bengio et al., 2009) along the interpolation path from the easier uniform distribution to a more challenging target distribution.
Why is it helpful? We need a thorough investigation of the training procedure to answer this. Since the loss function equation 7 is the expectation over the set of instances I , we use a batch of instances I1, ..., IB to calculate the empirical loss L̂τ (θ) and perform stochastic gradient descent. It gives:
\nabla_\theta \hat{L}_\tau(\theta) = \sum_{i=1}^{B} \nabla_\theta L_\tau(G_\theta(I_i), I_i) = \sum_{i=1}^{B} \frac{\partial G_\theta(I_i)}{\partial \theta} \nabla_\phi L_\tau(\phi, I_i)\big|_{\phi = G_\theta(I_i)} \qquad (9)
= \mathbb{E}_{I \sim \mathcal{I}} \Big[ \frac{\partial G_\theta(I)}{\partial \theta} \nabla_\phi L_\tau(\phi, I)\big|_{\phi = G_\theta(I)} \Big] + \xi \qquad (10)
\approx \mathbb{E}_{I \sim \mathcal{I}} \Big[ \frac{\partial G_\theta(I)}{\partial \theta} \big( \nabla_\phi L_\tau(\phi, I)\big|_{\phi = G_\theta(I)} + \zeta \big) \Big] \qquad (11)
In equation 10, we assume the batch introduces a stochastic term ξ in the gradient w.r.t. θ. In equation 11, we incorporate the stochastic term into the gradient with respect to ϕ. When we assume ζ is Gaussian noise, the inner term g = ∇ϕLτ(ϕ, I)|ϕ=Gθ(I) + ζ behaves as a stochastic Langevin gradient with respect to ϕ (Welling & Teh, 2011). Since the training data is sampled from a fixed distribution I ∼ I, the scale of the noise ζ is also fixed. When Lτ(ϕ, I) is not smooth, the randomness from ζ is negligible compared to the gradient ∇Lτ(ϕ, I) and cannot bring ϕ out of local optima. By introducing the temperature τ, we smooth the loss function and reduce the magnitude of ∇Lτ(ϕ, I). During training, annealed training performs an implicit simulated annealing (Kirkpatrick et al., 1983) for ϕ.
2.4 A TOY EXAMPLE
We look at a toy example to gain a more intuitive understanding of the annealed training. Consider an MIS problem on an undirected, unweighted graph G = (V, E); the corresponding energy function f(x) is:
f(x) = -\sum_{i=1}^{n} x_i + \sum_{(i,j) \in E} x_i x_j \qquad (12)
Its correctness can be justified by Proposition 3.1. When we use the variational distribution Qϕ in equation 8, the first term in Lτ(ϕ, I) becomes:
\mathbb{E}_{x \sim Q_\phi(\cdot)}[f^{(I)}(x)] = -\sum_{i=1}^{n} \phi_i + \sum_{(i,j) \in E} \phi_i \phi_j \qquad (13)
and accordingly, the gradient w.r.t. ϕ is:
g = -1 + 2 \sum_{j \in N(i)} \phi_j + \tau (\log \phi_i - \log(1 - \phi_i)) + \zeta \qquad (14)
where we assume ζ ∼ N(0, σ^2) for a very small σ. When the temperature τ = 0, ϕi will collapse to either 0 or 1 very fast. When ϕi = 1, we have g = −1 + ζ; when ϕi = 0, we have g ≥ 1 + ζ. Since σ is small, the noise ζ can hardly have an effect, and ϕ will be stuck at a local optimum, i.e., any maximal independent set such as the one in Figure 1(a). In Figure 1, we simulate the input (a) at decreasing temperatures τ = 1.0, 0.5, 0.1. When τ is large, all ϕi will be pushed to a neutral state, e.g., in Figure 1(b), where the differences in ϕi are on the scale of 10^{-3}. In this case, the noise ζ can significantly affect the sign of the gradient g and lead to phase transitions. By gradually decreasing the temperature, ϕ collapses to the global optimum and provides correct guidance to update θ.
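These dynamics can be simulated directly; the sketch below (our own illustration, with the star graph, step size, and noise scale chosen arbitrarily) runs noisy gradient steps on ϕ following equation 14 as stated while the temperature is lowered.

```python
# Sketch of the Section 2.4 dynamics: noisy gradient descent on phi for the
# unweighted MIS energy, with a decreasing temperature. Graph, learning rate,
# and noise scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5
edges = [(0, i) for i in range(1, n)]        # a star: node 0 touches all others
neighbors = [[] for _ in range(n)]
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

phi = rng.uniform(0.4, 0.6, size=n)          # near-neutral initialization
lr, sigma = 0.05, 0.01
for tau in np.linspace(1.0, 0.01, 400):
    p = np.clip(phi, 1e-6, 1 - 1e-6)
    grad = (-1.0 + 2.0 * np.array([sum(p[j] for j in neighbors[i]) for i in range(n)])
            + tau * (np.log(p) - np.log(1.0 - p))     # entropy term of equation 14
            + sigma * rng.normal(size=n))             # the Langevin-like noise zeta
    phi = np.clip(phi - lr * grad, 0.0, 1.0)

print(np.round(phi, 2))   # tends toward x = (0, 1, 1, 1, 1), the larger independent set
```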
3 CASE STUDY
We consider four combinatorial optimization problems on graphs in this work: maximum independent set (MIS), maximum clique, minimum dominating set (MDS), and minimum cut. All problems can be represented on an undirected weighted graph G = (V, E, w), where V = {1, ..., n} is the set of nodes, E is the set of edges, and w is the weight function. For any i ∈ V, wi = w(i) is the weight of the node. For any (i, j) ∈ E, wij = w(i, j) is the weight of the edge. For each problem, we derive the minimum value of the penalty coefficient β such that the energy function has the lowest energy at optimal solutions, and we use the derived values to design the loss functions in our experiments.
3.1 MAXIMUM INDEPENDENT SET AND MAXIMUM CLIQUE
An independent set is a subset of the vertices S ⊆ V , such that for arbitrary i, j ∈ S, (i, j) /∈ E. The MIS problem is finding an independent set S with the largest weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} c(x) := -\sum_{i=1}^{n} w_i x_i, \quad \text{subject to } x_i x_j = 0, \; \forall (i,j) \in E \qquad (15)
We define the corresponding energy function:
f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j) \in E} \beta_{ij} x_i x_j \qquad (16)
Proposition 3.1. If βij ≥ max{wi, wj} for all (i, j) ∈ E, then for any x ∈ {0, 1}n, there exists a x′ ∈ {0, 1}n that satisfies the constraints in equation 15 and has lower energy: f(x′) ≤ f(x).
Maximum clique is equivalent to MIS on the complementary graph. Since a GNN is unaware of this connection, studying maximum clique for learning-based approaches is still fruitful. The definition of maximum clique is given in Appendix B.2; here we show how to properly select the penalty coefficient. Proposition 3.2. If βij ≥ max{wi, wj} for all (i, j) ∈ Ec, then for any x ∈ {0, 1}n, there exists an x′ ∈ {0, 1}n that satisfies the constraints in equation 24 and has lower energy: f(x′) ≤ f(x).
3.2 MINIMUM DOMINATING SET
A dominating set is a subset of the vertices S ⊆ V such that for any v ∈ V, either v ∈ S or there exists u ∈ S with (u, v) ∈ E. The MDS problem is finding a dominating set S with the minimum weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} c(x) := \sum_{i=1}^{n} w_i x_i, \quad \text{subject to } (1 - x_i) \prod_{j \in N(i)} (1 - x_j) = 0, \; \forall i \in V \qquad (17)
We define the corresponding energy function:
f(x) := \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \beta_i (1 - x_i) \prod_{j \in N(i)} (1 - x_j) \qquad (18)
Proposition 3.3. If βi ≥ mink{wk : k ∈ N(i) or k = i}, then for any x ∈ {0, 1}n, there exists an x′ ∈ {0, 1}n that satisfies the constraints in equation 17 and has lower energy: f(x′) ≤ f(x).
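For concreteness, the sketch below (our own illustration with unit weights and a 4-node path graph) evaluates the MDS energy of equation 18 by brute force and checks that, with the critical penalty βi = 1, the minimum energy is attained by a feasible dominating set, as Proposition 3.3 guarantees.

```python
# Sketch: brute-force check of the MDS energy of equation 18 with unit weights
# and the critical penalty beta_i = 1. The path graph is an illustrative choice.
import itertools

def mds_energy(x, neighbors, beta=1.0):
    cost = sum(x)                                      # sum_i w_i x_i with w_i = 1
    penalty = sum(beta for i, nb in enumerate(neighbors)
                  if x[i] == 0 and all(x[j] == 0 for j in nb))
    return cost + penalty

neighbors = [[1], [0, 2], [1, 3], [2]]                 # path graph 0-1-2-3
states = list(itertools.product([0, 1], repeat=4))
best_energy = min(mds_energy(x, neighbors) for x in states)
feasible_minimizers = [x for x in states
                       if mds_energy(x, neighbors) == best_energy
                       and all(x[i] == 1 or any(x[j] == 1 for j in nb)
                               for i, nb in enumerate(neighbors))]
print(best_energy, feasible_minimizers[0])             # 2 (0, 1, 0, 1)
```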
3.3 MINIMUM CUT
A partition consists of two subsets: S and V \ S. The cut cut(S) is defined as the total weight of the edges between S and V \ S. The volume of S is defined as vol(S) = ∑i∈S di, where di is the degree of node i. The minimum cut problem is to find an S having the minimum cut, subject to the volume of S lying in [D0, D1]. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} c(x) := \sum_{(i,j) \in E} x_i (1 - x_j) w_{ij}, \quad \text{subject to } D_0 \le \sum_{i=1}^{n} d_i x_i \le D_1 \qquad (19)
We define the corresponding energy function:
f(x) := \sum_{(i,j) \in E} x_i (1 - x_j) w_{ij} + \beta \Big( \sum_{i=1}^{n} d_i x_i - D_1 \Big)_+ + \beta \Big( D_0 - \sum_{i=1}^{n} d_i x_i \Big)_+ \qquad (20)
Proposition 3.4. If β ≥ maxi{∑j∈N(i) |wi,j|}, then for any x ∈ {0, 1}n, there exists an x′ ∈ {0, 1}n that satisfies the constraints in equation 19 and has lower energy: f(x′) ≤ f(x).
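A direct transcription of the energy in equation 20 might look as follows; the toy graph, volume bounds, and the convention of listing each undirected edge in both directions are our assumptions for illustration.

```python
# Sketch of the minimum-cut energy of equation 20 with hinge penalties.
# edge_index is assumed to list each undirected edge in both directions;
# graph data and bounds are illustrative.
import torch

def min_cut_energy(x, edge_index, edge_weight, degrees, d0, d1, beta):
    src, dst = edge_index
    cut = (x[src] * (1 - x[dst]) * edge_weight).sum()
    vol = (degrees * x).sum()
    penalty = beta * torch.clamp(vol - d1, min=0.0) + beta * torch.clamp(d0 - vol, min=0.0)
    return cut + penalty

# Triangle with unit weights; beta at the critical value max_i sum_j |w_ij| = 2
edge_index = torch.tensor([[0, 1, 0, 2, 1, 2], [1, 0, 2, 0, 2, 1]])
edge_weight = torch.ones(6)
degrees = torch.tensor([2.0, 2.0, 2.0])
x = torch.tensor([1.0, 0.0, 0.0])
print(min_cut_energy(x, edge_index, edge_weight, degrees, d0=2.0, d1=4.0, beta=2.0))  # tensor(2.)
```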
4 RELATED WORK
Recently, there has been a surge of interest in learning algorithms for CO problems (Bengio et al., 2020). Supervised learning is widely used. Numerous works have combined GNNs with search procedures to solve classical CO problems, such as the traveling salesman problem (Vinyals et al., 2015; Joshi et al., 2019; Prates et al., 2019), graph matching (Wang et al., 2019; 2020), quadratic assignments (Nowak et al., 2017), graph coloring (Lemos et al., 2019), and MIS (Li et al., 2018).
[Rows displaced from the results tables; the column headers (dataset and metric labels) were lost in extraction. Each cell is approximation ratio ± std followed by running time in seconds.]
Erdos 0.805 ± 0.052 0.156 | 0.781 ± 0.644 2.158 | 0.986 ± 0.056 0.010 | 0.975 ± 0.033 0.020
Our's 0.898 ± 0.030 0.165 | 0.848 ± 0.529 2.045 | 0.997 ± 0.020 0.010 | 0.986 ± 0.012 0.020
RUNCSP 0.823 ± 0.145 1.936 | 0.587 ± 0.312 7.282 | 0.912 ± 0.101 0.254 | 0.845 ± 0.184 4.429
RUNCSP(A) 0.851 ± 0.158 1.942 | 0.629 ± 0.451 7.268 | 0.923 ± 0.188 0.281 | 0.877 ± 0.209 4.438
Greedy 0.761 ± 0.058 0.002 | 0.720 ± 0.046 0.009 | 0.996 ± 0.017 0.001 | 0.957 ± 0.037 0.006
MFA 0.784 ± 0.058 0.042 | 0.747 ± 0.056 0.637 | 0.998 ± 0.007 0.002 | 0.994 ± 0.010 0.003
G(0.5s) 0.864 ± 0.169 0.723 | 0.632 ± 0.176 1.199 | 1.000 ± 0.000 0.029 | 0.950 ± 0.191 0.441
G(1.0s) 0.972 ± 0.065 1.063 | 0.635 ± 0.176 1.686 | 1.000 ± 0.000 0.029 | 1.000 ± 0.000 0.462
Erdos 0.813 ± 0.067 0.279 | 0.735 ± 0.084 0.622 | 0.960 ± 0.019 0.139 | 0.822 ± 0.085 0.222
Our's 0.901 ± 0.055 0.262 | 0.831 ± 0.078 0.594 | 0.988 ± 0.011 0.143 | 0.920 ± 0.083 0.213
RUNCSP 0.821 ± 0.131 2.045 | 0.574 ± 0.299 7.332 | 0.887 ± 0.134 0.164 | 0.832 ± 0.153 4.373
RUNCSP(A) 0.860 ± 0.189 2.101 | 0.609 ± 0.381 7.294 | 0.895 ± 0.162 0.188 | 0.877 ± 0.221 4.442
Greedy 0.764 ± 0.064 0.002 | 0.727 ± 0.038 0.014 | 0.999 ± 0.002 0.001 | 0.959 ± 0.034 0.001
MFA 0.804 ± 0.064 0.144 | 0.710 ± 0.045 0.147 | 1.000 ± 0.000 0.005 | 0.994 ± 0.010 0.010
G(0.5s) 0.948 ± 0.076 0.599 | 0.812 ± 0.087 0.617 | 0.997 ± 0.035 0.061 | 0.976 ± 0.065 0.382
G(1.0s) 0.984 ± 0.042 0.705 | 0.847 ± 0.101 1.077 | 0.999 ± 0.015 0.062 | 0.997 ± 0.029 0.464
Another fruitful direction is combining learning with existing solvers. For example, in the branch and bound algorithm, He et al. (2014); Khalil et al. (2016); Gasse et al. (2019); Nair et al. (2020) learn the variable selection policy by imitating the decision of oracle or rules designed by human experts. However, the success of supervised learning relies on large labeled datasets, which is hard to efficiently generate in an unbiased and representative manner (Yehuda et al., 2020).
Many works, therefore, choose to use reinforcement learning instead. Dai et al. (2017) combines Q-learning with greedy algorithms to solve CO problems on graphs. Q-learning is also used in (Bai et al., 2020) for maximum subgraph problem. Sun et al. (2020) uses an evolutionary strategy to learn variable selection in the branch and bound algorithm. Yolcu & Póczos (2019) employs REINFORCE algorithm to learn local heuristics for SAT problems. Chen & Tian (2019) uses actor-critic learning to learn a local rewriting algorithm. Despite being a promising approach that avoids using labeled data, reinforcement learning is typically sample inefficient and notoriously unstable to train due to poor gradient estimations, correlations present in the sequence of observations, and hard explorations (Espeholt et al., 2018; Tang et al., 2017).
Works in unsupervised learning show promising results. In initial attempts, Hopfield & Tank (1985); Van den Bout & Miller (1989); Ramanujam & Sadayappan (1995) transform CO problems into optimization problems of neural networks with differentiable objective functions. More recently, a series of deep learning approaches has emerged. Yao et al. (2019) train a GNN for the max-cut problem by optimizing a relaxation of the cut objective, Toenshoff et al. (2021) train an RNN for maximum-SAT via maximizing the probability of its prediction, and Karalias & Loukas (2020) use a GNN to predict a distribution and train the network to minimize the expectation of the objective function under this distribution. The probabilistic method provides a good framework for unsupervised learning. However, optimizing the distribution is typically non-convex (Mezard & Montanari, 2009), making the training very unstable.
5 EXPERIMENTS
5.1 SETTINGS
Dataset: For MIS and maximum clique, problems on both real and random graphs are easy (Dai et al., 2020). Hence, we follow Karalias & Loukas (2020) to use RB graphs (Xu et al., 2007), designed to
generate hard instances. We use a small dataset containing graphs with 200-300 nodes and a large dataset containing graphs with 800-1200 nodes. For MDS, we follow Dai et al. (2020) to use BA graphs with 4 attaching edges (Barabási & Albert, 1999). We also use a small dataset containing graphs with 200-300 nodes and a large dataset containing graphs with 800-1200 nodes. We also use real graph datasets Collab, Twitter from TUdataset (Morris et al., 2020). For minimum cut, we follow Karalias & Loukas (2020) and use real graph datasets including SF-295 (Yan et al., 2008), Facebook (Traud et al., 2012), and Twitter (Morris et al., 2020). For RB graphs, the optimal solution is known during the graph construction. For other problems, we generate the "ground truth" solution through Gurobi 9.5 (Gurobi Optimization) with a time limit of 3600 seconds. For synthetic datasets, we generate 2000 graphs for training, 500 for validation, and 500 for testing. For real datasets, we follow Karalias & Loukas (2020) and use a 60-20-20 split for training, validating, and testing.
Implementation: We train our graph neural network on training data with 500 epochs. We choose the penalty coefficient β at the critical point for each problem type. We use the schedule:
τk = τ0/(1 + αk) (21)
where τ0 is chosen as the Lipschitz constant of the energy function in equation 2 and α is selected to make sure the final temperature τ500 = 0.001. Since the contribution of this work focuses on the training framework, the architecture of the graph neural network is not important. Hence, we provide results from applying annealed training to Karalias & Loukas (2020) and Toenshoff et al. (2021) for a fair comparison, denoted as "Annealed Erdos" and "Annealed RUNCSP" respectively. In particular, the architecture from Karalias & Loukas (2020) consists of multiple layers of the Graph Isomorphism Network (Xu et al., 2018) and a graph attention network (Veličković et al., 2017). For more details, refer to Karalias & Loukas (2020). Moreover, the architecture from Toenshoff et al. (2021) approximates a constraint language (Dechter et al., 2003) with a message-passing GNN that uses an LSTM for internal updates (Hochreiter & Schmidhuber, 1997). With both of these GNN architectures, after obtaining the variational distribution Qϕ in equation 8, we generate the solution via conditional decoding (Raghavan, 1988).
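A simplified reading of the conditional decoding step for MIS is sketched below: nodes are committed in order of decreasing marginal ϕi, skipping any node whose neighbor is already selected. The tie-breaking and repair rule are our assumptions, not the exact released implementation.

```python
# Sketch of conditional decoding from the mean-field marginals phi for MIS.
# This greedy-by-marginal procedure is a simplified reading, not the authors'
# exact implementation.
import numpy as np

def decode_mis(phi, neighbors):
    order = np.argsort(-np.asarray(phi))      # highest marginal first
    selected = set()
    for i in order:
        if all(j not in selected for j in neighbors[i]):
            selected.add(int(i))
    return selected

phi = [0.9, 0.2, 0.8, 0.1]
neighbors = [[1], [0, 2], [1, 3], [2]]        # path graph 0-1-2-3
print(decode_mis(phi, neighbors))             # {0, 2}
```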
Baselines: We compare our method with unsupervised neural methods, classical algorithms, and integer programming solvers. To establish a strong baseline for neural methods, we use the Erdos GNN (Karalias & Loukas, 2020), the state-of-the-art unsupervised learning framework for combinatorial optimization problems. For maximum clique and MIS, we transform the problem to constraint programming and compare with RUNCSP (Toenshoff et al., 2021). We also implement the annealed training version of RUNCSP and denote it as RUNCSP(A). We followed Karalias & Loukas (2020) for minimum cut and built the L1 GNN and L2 GNN. Among classical algorithms, we consider greedy algorithms and mean field annealing (MFA) (Bilbro et al., 1988). MFA also runs a mean field approximation (Anderson, 1988) to predict a variational distribution as our method does. The difference is that the update rule of MFA is determined after seeing the current graph, while the parameters in the GNN are trained on the whole dataset. Also, in minimum cut, we follow Karalias & Loukas (2020) to compare with well-known and advanced algorithms: Pagerank-Nibble (Andersen et al., 2006), Capacity Releasing Diffusion (CRD) (Wang et al., 2017), Max-flow Quotient-cut Improvement (MQI) (Lang & Rao, 2004), and Simple-Local (Veldt et al., 2016). For the integer programming solver, we use Gurobi 9.0 (Gurobi Optimization) with different time limits; we denote by G(t s) Gurobi 9.0 run with a solving time limit of t seconds. One should note that Gurobi performs preprocessing before solving, so the actual running time can be longer than the given time limit.
5.2 RESULTS
We report the results for MIS in Table 1, the results for maximum clique in Table 2, for MDS in Table 3, for minimum cut in Table 4. More results for comparison with supervised learning methods and evaluation on very large graphs are provided in C.4 and C.5. In the MIS and maximum clique, we report the ratios computed by dividing the optimal value by the obtained value (the larger, the better). In the MDS, we report the ratios computed from the obtained value by dividing the optimal value (the larger, the better). In minimum cut, we follow Karalias & Loukas (2020) and evaluate the performance via local conductance: cut(S)/vol(S) (the smaller the better). We can see that the annealed training substantially improves the performance of Erdos across all problem types and all datasets, except for SF-295 in minimum cut, by utilizing a better-unsupervised training framework. Our method also outperforms greedy heuristics, classical algorithms such as MFA, CRD, MQI, and other learning based approaches such as RUNCSP, L1/L2 GNN. Besides, with annealed training,
the learned GNN outperforms MFA in most problems, with fewer iterations. This indicates that learning the shared patterns in graphs is helpful for solving CO problems. Compared to the integer solver, Gurobi is able to obtain good ratios on smaller graphs; on larger-scale instances, our method achieves comparable or even better results.
5.3 PARAMETER CHANGE DISTANCE
We want to stress that we use the same graph neural network as Erdos or RUNCSP, and the performance improvements come from our annealed training framework. In the scatter plots of Figures 2 and 3, we report the relative change of the parameters of the GNN for the MIS and MDS problems on the Twitter dataset. The relative change is calculated as ∥u − v∥2 / ∥v∥2, where v and u are vectors flattened from the parameters of the GNN before and after training. For each method, we run 20 seeds. After introducing annealed training, we see that both the ratio and the relative change of the parameters systematically increase, meaning the parameters of the GNN can traverse to more distant regions and find better optima with annealed learning. We believe this effectively supports that annealed training prevents the training from being stuck at local optima.
6 ABLATION STUDY
We conduct an ablation study to answer two questions:
1. How does the penalty coefficient β in equation 2 influence the performance?
2. How does the annealing schedule influence the performance?
We conduct the experiments for the MDS problem on the small BA graphs from the previous section.
6.1 PENALTY COEFFICIENT
In the MDS problem, we know that the minimum penalty coefficient β needed to ensure the EBMs are unbiased on the unweighted BA graphs is β = 1.0. To justify the importance of using the minimum penalty, we evaluate the performance for β = {0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 5.0}. For each β, we run experiments with five random seeds, and we report the results in Figure 4. We can see that the minimum penalty β = 1 has the best ratio. When the penalty coefficient β < 1, the EBMs
in equation 3 are biased and have weights on infeasible solutions, thereby reducing the performance. When the penalty coefficient β > 1, the energy model in equation 3 becomes less smooth and increases the difficulty in training. The penalty coefficient β = 1 gives the smoothest unbiased EBMs and has the best performance. We want to note that when β = 0, the loss function is non-informative, and the performance ratio can be as low as 0.3, so we do not plot its result in the figure.
[Figure 2: Distance in MIS (x-axis: relative change of GNN after training; y-axis: ratio). Figure 3: Distance in MDS (same axes). Figure 4: Ablation for β (x-axis: penalty coefficient; y-axis: ratio).]
6.2 ANNEALING SCHEDULE
We use the schedule in equation 21 so as to make sure the potential change f/τk+1 − f/τk ≡ C is a constant for all steps k. In fact, with the schedule in equation 21, the potential f/τk = (1 + α(k − 1)) f/τ0 is a linear function w.r.t. k. Hence, we name it a linear schedule. It is possible to use other schedules, e.g. f/τk = (1 + α(k − 1))^{1/2} f/τ0 and f/τk = (1 + α(k − 1))^3 f/τ0, and we name them the concave and convex schedules. The visualization of the temperature schedule and the potential schedule is given in Figure 5. The initial temperature is also an important hyperparameter. We evaluate the initial temperatures τ0 = {0.0, 0.1, 0.5, 1.0, 2.0, 5.0}. We report the results in Figure 5. We see that the performance is robust to whether a convex, linear, or concave schedule is used. The more important factor is the initial temperature τ0. The performance is reduced when τ0 is too small, as the energy-based model in equation 3 is not smooth enough, and the performance is robust when τ0 is large.
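The linear, concave, and convex schedules can be generated from one helper; the sketch below follows equation 21 for the linear case and raises the same factor to a power for the other two, with the final-temperature calibration described above (the helper itself is our construction).

```python
# Sketch of the temperature schedules of Section 6.2. power=1 reproduces
# equation 21 (linear potential); power=0.5 and power=3 give the concave and
# convex variants. The helper and its calibration are our construction.
def temperature(k, tau0, alpha, power=1.0):
    return tau0 / (1.0 + alpha * k) ** power

epochs = 500
tau0, tau_final = 1.0, 1e-3
alpha = (tau0 / tau_final - 1.0) / epochs      # makes tau_500 == tau_final when power=1
print(temperature(0, tau0, alpha), temperature(epochs, tau0, alpha))   # 1.0, ~0.001
```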
7 DISCUSSION
This paper proposes a generic unsupervised learning framework for combinatorial optimization problems and substantially improves the performance of the state-of-the-art method. One restriction of the current method is that it relies on conditional decoding to sample solutions from the learned variational distributions. For problems with more complex constraints, the decoded solutions might be infeasible. Hence, we believe better decoding strategies should be considered in future work.
The framework’s success relies on smoothing the loss function via critical penalty coefficients and annealed training as they effectively prevent the training from being stuck at local optima. The techniques introduced here can be potentially applied in a broader context beyond combinatorial optimization, especially in the weakly supervised learning setting like logic reasoning (Huang et al., 2021), program induction (Chen et al., 2020), question answering (Ren et al., 2021) where fine-grained supervisions are missing and required to be inferred.
A TYPES OF PROBLEMS SOLVABLE
The current method uses conditional decoding (Raghavan, 1988) to sample solutions from the learned variational distributions, which requires monotonic post-processing to make sure the final solution is feasible. For example, in the maximum independent set, the monotonic post-processing removes nodes when a conflict happens; in the minimum dominating set, it adds nodes when a node has not been covered. Such a framework can be applied to CO problems that have trivial solutions, such as set covering problems, but cannot be applied to CO problems with complicated constraints, such as vehicle routing problems.
B COMPLETE PROOF
B.1 MAXIMUM INDEPENDENT SET
In MIS, we use the energy function:
f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j) \in E} \beta_{ij} x_i x_j \qquad (22)
We are going to prove the following proposition. Proposition B.1. If βij ≥ min{wi, wj} for all (i, j) ∈ E, then for any x ∈ {0, 1}n, there exists a x′ ∈ {0, 1}n that satisfies the constraints in equation 15 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}n, if x satisfies all constraints, we only need to let x′ = x. Else, there must exist an edge (i, j) ∈ E, such that xixj = 1. Denote k = argmin{wi, wj}, we define x′i = xi if i ̸= k and x′k = 0. In this case, we have:
f(x') - f(x) = w_k - \sum_{j \in N(k)} \beta_{k,j} x_j \le w_k \Big( 1 - \sum_{j \in N(k)} x_j \Big) \le 0 \qquad (23)
Thus we show f(x′) ≤ f(x). On the other side, consider a graph G = (V = {1, 2}, E = {(1, 2)}) and β12 < w1 < w2. Then the maximum independent set is {2}, which can be represented by x = (0, 1). However, in this case, x′ = (1, 1) is infeasible while f(x′) ≤ f(x). This means the condition we just derived is sharp.
B.2 MAXIMUM CLIQUE
A clique is a subset of the vertices S ⊆ V, such that every two distinct i, j ∈ S are adjacent: (i, j) ∈ E. The maximum clique problem is finding a clique S with the largest weight. Rigorously, if we denote xi = 1 to indicate i ∈ S and xi = 0 to indicate i /∈ S, the problem can be formulated as:
\arg\min_{x \in \{0,1\}^n} c(x) := -\sum_{i=1}^{n} w_i x_i, \quad \text{subject to } x_i x_j = 0, \; \forall (i,j) \in E^c \qquad (24)
where Ec = {(i, j) ∈ V × V : i ̸= j, (i, j) /∈ E} is the set of complement edges on graph G. We define the corresponding energy function:
f(x) := -\sum_{i=1}^{n} w_i x_i + \sum_{(i,j) \in E^c} \beta_{ij} x_i x_j \qquad (25)
We are going to prove the following proposition. Proposition B.2. If βij ≥ min{wi, wj} for all (i, j) ∈ Ec, then for any x ∈ {0, 1}n, there exists a x′ ∈ {0, 1}n that satisfies the constraints in equation 24 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}n, if x satisfies all constraints, we only need to let x′ = x. Else, there must exist an edge (i, j) ∈ Ec, such that xixj = 1. Denote k = argmin{wi, wj}, we define x′i = xi if i ̸= k and x′k = 0. In this case, we have:
f(x') - f(x) = w_k - \sum_{j : (k,j) \in E^c} \beta_{k,j} x_j \le w_k \Big( 1 - \sum_{j : (k,j) \in E^c} x_j \Big) \le 0 \qquad (26)
Thus we show f(x′) ≤ f(x). On the other side, consider a graph G = (V = {1, 2}, E = {}) and β12 < w1 < w2. Then the maximum clique is {2}, which can be represented by x = (0, 1). However, in this case, x′ = (1, 1) is infeasible while f(x′) ≤ f(x). This means the condition we just derived is sharp.
B.3 MINIMUM DOMINATING SET
In MDS, we use the energy function:
f(x) := \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \beta_i (1 - x_i) \prod_{j \in N(i)} (1 - x_j) \qquad (27)
We are going to prove the following proposition. Proposition B.3. If βi ≥ mink{wk : k ∈ N(i) or k = i}, then for any x ∈ {0, 1}n, there exists an x′ ∈ {0, 1}n that satisfies the constraints in equation 17 and has lower energy: f(x′) ≤ f(x).
Proof. For arbitrary x ∈ {0, 1}n, if x satisfies all constraints, we only need to let x′ = x. Else, there must exist a node t ∈ V , such that xt = 0 and xj = 0 for all j ∈ N(t). Let k = argmin{wj : j ∈ N(t), or j = t}, we define x′i = xi if i ̸= k and x′k = 1. In this case, we have:
f(x') - f(x) = w_k - \beta_t + \sum_{i \ne t} \beta_i \Big[ (1 - x'_i) \prod_{j \in N(i)} (1 - x'_j) - (1 - x_i) \prod_{j \in N(i)} (1 - x_j) \Big] \le 0 \qquad (28)
Thus, we prove f(x′) ≤ f(x). On the other side, consider a graph G = (V = {1}, E = {}) and β1 < w1. Then the minimum dominating set is {1}, which can be represented by x = (1). However, in this case, x′ = (0) is infeasible while f(x′) ≤ f(x). This means the condition we just derived is sharp.
B.4 MINIMUM CUT
For the minimum cut problem, we use the energy function:
f(x) := \sum_{(i,j) \in E} x_i (1 - x_j) w_{ij} + \beta \Big( \sum_{i=1}^{n} d_i x_i - D_1 \Big)_+ + \beta \Big( D_0 - \sum_{i=1}^{n} d_i x_i \Big)_+ \qquad (29)
We are going to prove the following proposition. Proposition B.4. If β ≥ maxi{∑j∈N(i) |wi,j|}, then for any x ∈ {0, 1}n, there exists an x′ ∈ {0, 1}n that satisfies the constraints in equation 19 and has lower energy: f(x′) ≤ f(x).
C EXPERIMENT DETAILS
C.1 HARDWARE
All methods were run on Intel(R) Xeon(R) Gold 5215 CPU @ 2.50GHz, with 377GB of available RAM. The neural networks were executed on a single RTX6000 25GB graphics card. The code was executed on version 1.9.0 of PyTorch and version 1.7.2 of PyTorch Geometric.
C.2 GREEDY ALGORITHM
For MIS, the greedy algorithm can be described in the following steps:
1. Pick the node i with the smallest degree di in the candidate set.
2. Delete i and all of its neighborhood N(i) = {j : (i, j) ∈ E} from the current graph.
3. Repeat steps 1-2 until the current graph is empty.
For maximum clique, we first transform the graph into its complement, then apply the greedy algorithm for MIS.
For MDS, the greedy algorithm can be described in the following steps (a code sketch follows the list):
1. For every node i, initialize its state si = 1 to indicate that it has not been covered.
2. For every node i, initialize its covering number ci = si + ∑j∈N(i) sj to indicate how many nodes can be covered by selecting node i.
3. Select the node i with the largest covering number ci.
4. Mark si = 0, and sj = 0 for j ∈ N(i).
5. Repeat steps 3-4 until all si = 0.
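The sketch below implements these steps for the unweighted case; recomputing the covering numbers from scratch each iteration is our simplification for clarity rather than speed.

```python
# Sketch of the greedy MDS heuristic above (unweighted). Covering numbers are
# recomputed every iteration for clarity; an efficient version would update
# them incrementally.
def greedy_mds(neighbors):
    n = len(neighbors)
    uncovered = [1] * n                       # s_i from step 1
    selected = []
    while any(uncovered):
        cover = [uncovered[i] + sum(uncovered[j] for j in neighbors[i]) for i in range(n)]
        i = max(range(n), key=lambda v: cover[v])
        selected.append(i)
        uncovered[i] = 0
        for j in neighbors[i]:
            uncovered[j] = 0
    return selected

print(greedy_mds([[1], [0, 2], [1, 3], [2]]))   # [1, 2] on the path graph 0-1-2-3
```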
C.3 DATASETS
For MIS and maximum clique, we follow Karalias & Loukas (2020) and use RB graphs (Xu et al., 2007). The construction of RB graphs has parameters n, k, and p. Following Karalias & Loukas (2020), for small graphs, we use n uniformly sampled from the integers [20, 25] and k uniformly sampled from [5, 12]; for large graphs, we use n uniformly sampled from the integers [40, 55] and k uniformly sampled from [20, 25].
For the minimum dominating set, we follow Dai et al. (2020) and use Barabasi-Albert networks (Barabási & Albert, 1999) with attachment 4.
C.4 COMPARISON TO SUPERVISED LEARNING
In order to compare our unsupervised results to supervised results, we provide an evaluation for the MDS problem on BA-4 graphs, using the supervised learning results in Dai et al. (2020). As shown in Table 5, we can see that annealed training significantly improves the performance of unsupervised learning and has a ratio very close to supervised learning. We also provide a comparison with supervised learning on small RB graphs for MIS. As a source of labels, we use Gurobi to solve the maximum independent set problem. Due to computational limitations, we set a time limit of 10 seconds, and some of the instances are not solved to optimality. In this case, as shown in Table 6, we observe that, with a proper training algorithm, unsupervised learning can even beat supervised learning.
C.5 EVALUATION ON VERY LARGE GRAPHS
We conduct extra experiments for MIS on large BA-4 graphs following Dai et al. (2020). The model is trained on BA-4 graphs with 1024-1100 nodes. For each larger size, we evaluate the methods on 100 graphs and report the mean, standard deviation, and average running time. We can see that the performance of Gurobi decreases with increasing graph size relative to the learning-based approaches. Another observation
is that the learning-based approach of Dai et al. (2020) has a much smaller running time, as its conditional decoding is implemented in C++, while our conditional decoding is implemented in Python. | 1. What is the focus and contribution of the paper regarding unsupervised learning on graph optimization problems?
2. What are the strengths of the proposed annealed training procedure, particularly in its empirical results and organization?
3. What are the weaknesses of the paper, such as concerns regarding scalability, principled derivation of the loss objective, and graph net architecture effects?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an annealed training procedure for unsupervised learning on graph optimization problems, including maximum independent set, maximum clique, minimum dominating set, and minimum cut. The procedure is set up so that the minimum of the smoothed model still corresponds to the minimizer of the original discrete optimization problem.
The annealed training procedure operates on an energy-based model: the objective of this model is the sum of exponentials of negative energy values (defined on the objective values of the problem). Then, the annealed training procedure performs an implicit simulated annealing on the objective of the energy-based model. A toy example is provided to give an intuitive understanding of the annealed training on the maximum independent set.
The experiments compare the annealed training procedure with Erdos (a recent unsupervised learning framework for combinatorial optimization on graphs). The results demonstrate that the proposed procedure improves on the solutions found by Erdos by about 10% on a number of graphs.
Strengths And Weaknesses
The strength of this paper seems to be in the empirical results; this is shown by the comparison between this paper’s method and Erdos (a prior method). Besides, the paper is well-written and well-organized. The toy example in section 2.4 is helpful for illustrating an intuitive understanding.
The weakness of this paper is about robustness checks of the proposed approach: For example, how does the annealed training procedure scale to large graphs? Is there a principled derivation of the loss objective (like in boosting)? Besides, how would the graph net architecture affect the optimization method?
Clarity, Quality, Novelty And Reproducibility
Simulated annealing is a classical idea in combinatorial optimization. This paper applies simulated annealing to unsupervised learning on graphs; this is a reasonable approach that I think would benefit the community and deserves more study.
The supplementary materials include the code for the experiments. There is no documentation although the code is well-organized, so I think the experimental result of this work could be reproduced following the provided code. |
ICLR | Title
Generating Sequences by Learning to Self-Correct
Abstract
Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints, and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present SELF-CORRECTION, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that SELFCORRECTION improves upon the base generator in three diverse generation tasks– mathematical program synthesis, lexically-constrained generation, and toxicity control– even when the corrector is much smaller than the base generator.
1 INTRODUCTION
The standard practice for natural language generation tasks is inherently single-pass: applying a decoding procedure to either a few-shot prompted language model or one tuned for a given task, then considering the generation as “finished” (e.g. Radford et al. (2019); Brown et al. (2020); Chen et al. (2021)). Powerful generation models often meet most of the task requirements, yet miss a few (e.g., omitting a subset of keywords), or generate incorrect hypotheses that nevertheless provide useful structure (e.g., a correct problem solving strategy with a missing step). However, after generating even a slightly sub-optimal sequence, the single-pass paradigm requires models to “start from scratch”, effectively discarding work already done. A more natural, intuitive approach is leveraging the generation as a useful starting point to refine into a higher quality output.
To formalize this intuition, we introduce Self-Correction for Sequence Generation. Figure 1 demonstrates its central principle: a generation model is re-framed as a base generator, which produces a reasonable initial hypothesis but does not need to solve the task in one pass, and a second module–the corrector–trained to make up the difference between the hypothesis and an optimal solution. Neither the generator nor the corrector must solve the full task in one pass, and the corrector can be applied multiple times to iteratively improve the output (§3.6). We propose a simple, general procedure for training the corrector (Figure 2) by pairing generator outputs with carefully selected targets. The result is a system which self-corrects, producing outputs through multiple generation passes and breaking the task into steps that can be solved by dedicated and efficient sub-systems.
Self-Correction builds on past work for correction in the code and text (e.g. Yasunaga et al. (2021); Faltings et al. (2021)) domains, but provides a unified formalism with minimal assumptions about
data and feedback, which applies generally to diverse tasks. A corrector model improves the base generator on 3 such tasks in our experiments: mathematical program synthesis (§3.1), lexically constrained generation (§3.2), and toxicity reduction (§3.3). The trained corrector model even transfers to a larger generator with similar performance to training from scratch (§3.4). Finally, we explore introducing a third module to the Self-Correction system (§3.5)–explicitly using natural language feedback to guide corrections–with promising results. Self-Correction is an exciting path to build on the generations of strong models, with efficient, effective, and transferable corrector networks.
2 SELF-CORRECTING SEQUENCE GENERATORS
A typical autoregressive text generator (e.g. GPT-3 (Brown et al., 2020)) maps an input prompt to a distribution over outputs using a single parameterized module (e.g. a large transformer), p0(y|x). We explore an alternative that decomposes into two modules, a base generator, and a corrector,
p(y|x) = \sum_{y_0} \underbrace{p_0(y_0|x)}_{\text{generator}} \; \underbrace{p_\theta(y|y_0, x)}_{\text{corrector}} \qquad (1)
where the generator provides an initial hypothesis that is refined by the corrector. In practice, the corrector can be applied multiple times, p(y_T|x) = \sum_{y_0} \sum_{y_1} \cdots \sum_{y_{T-1}} p_0(y_0|x) \prod_t p_\theta(y_{t+1}|y_t, x). Since a model of this form can both generate and correct its generations, we call it a Self-Corrector.
Self-correctors have several unique properties compared to typical generators. First, a self-corrector decouples generation and correction, allowing us to freely parameterize each module – for instance, by prompting a single language model or using two different language models. In this paper, we develop a framework to train a separate corrector model (§2.1). We find that the resulting selfcorrector improves upon the generator alone (§3), even when the corrector is much smaller (§3.4).
Second, since the generator and the corrector are separated, we can keep the generator as a generalpurpose language model and train the corrector with different objectives for different task requirements. In §2.1, we propose a training algorithm for the corrector that is dedicated to improving generations, where the improvement can be in any aspect, measured by scalar values.
Third, the corrector can receive explicit feedback about intermediate generations to guide subsequent generations. Formally, p(y|x) = \sum_{y_0} p_0(y_0|x)\, p_\theta(y|y_0, x, f(y_0)), where f is the feedback. The feedback can be of many forms, e.g. a sentence, a compiler trace, etc. In contrast, a typical generator that generates in a single pass does not leverage feedback on its own generation. In this paper, we show that the corrector can learn to exploit explicit natural language feedback to achieve better performance (§3.5). Next, we describe our training framework of the corrector.
2.1 LEARNING A CORRECTOR
Our goal is to have the generator generate an initial hypothesis, then improve the hypothesis with the corrector (Eq. 1). We train the corrector to improve the quality of a hypothesis, while staying as close as possible to the original hypothesis. Here, quality is measured with a scalar value function v(y) which is accessible at training time (e.g. 0/1 indicator of program correctness, a toxicity score).
Algorithm 1 Self-corrective learning
input: generator p0, corrector pθ, prompts X, value v(·), feedback f(·)
Initialize datapool D by sampling from p0    ▷ Initialization: Eq. 2
for iteration ∈ {1, 2, . . .} do
    Form value-improving pairs P from D    ▷ Pairing: Eq. 3
    for step in 1, 2, . . . , M do
        Sample a batch of value-improving pairs from P using Eq. 4
        Compute the loss and update θ using gradient descent    ▷ Learning
    for x ∈ X do
        Sample hypotheses y from the datapool D
        Generate corrections y′ ∼ pθ(·|y, x, f(y))
        Add all (x, y′, v(y′), f(y′)) to the datapool D    ▷ Exploration: Eq. 5
Since direct supervision on how to improve hypotheses is not available, we design a new algorithm to train the corrector, which we refer to as self-corrective learning. The algorithm collects a pool of generations, pairs them and selects pairs of generation that increase in value and are nearby, then updates the corrector on these pairs. As training progresses, more generations are added to the pool using the current corrector. Algorithm 1 summarizes self-corrective learning, detailed below.
Initialization. Self-corrective learning begins with a generator p0(y0|x), a corrector pθ(y′|y, x) , a set of training prompts X , and a value function v : Y → R. Optionally, we can use additional feedback f : Y → F and learn pθ(y′|y, x, f(y)), where F is arbitrary. The algorithm initializes a datapool of (input, output, value, feedback) examples by using the generator to generate multiple outputs for each input. Formally,
Dx = {(x, y, v(y), f(y)) | for all y ∈ y1:N ∼ q(p0(·|x))}, D = ⋃ x∈X Dx, (2)
where y1:N denotes N outputs generated with decoding algorithm q (e.g. temperature sampling). When available, (x, y, v(y), f(y)) examples from another source (e.g. a dataset) can also be added.
Pairing. Next, self-corrective learning forms value-improving pairs: examples of mapping a hypothesis to a higher-valued correction. We use the datapool D to form a set of (input, hypothesis, correction) pairs. A pair is formed when an output has a higher value than another ∗:
Px = {(x, y, y′) | v(y) < v(y′) for all y, y′ ∈ Dx ×Dx}, P = ⋃ x∈X Px, (3)
Learning. Next, self-corrective learning selects (input, hypothesis, correction) pairs to update the corrector with. We sample an input, x ∼ U(X), then sample a (x, y, y′) pair proportional to its improvement in value as well as the proximity between the hypothesis y and the correction y′:
P[(x, y, y')|x] \propto \exp\big( \underbrace{\alpha \cdot (v(y') - v(y))}_{\text{improvement}} + \underbrace{\beta \cdot s(y, y')}_{\text{proximity}} \big) / Z(y), \qquad (4)
where s(y, y′) is a similarity function and Z(y) normalizes over the available corrections for y in Px. Increasing the hyperparameter α ∈ R≥0 puts more weight on targets that add more value, while increasing β ∈ R≥0 retains more similar targets. We update the corrector using the cross-entropy loss L(θ) = − log pθ(y′|y, x, f(y)) on batches sampled in this way.
∗We also store the value and feedback for y and y′ along with (x, y, y′), which we omit to reduce clutter.
Exploration. During exploration, self-corrective learning adds new generations to the datapool by generating from the current corrector:
D′x = {(x, y′, v(y′), f(y′)) | for all y′ ∈ y′1:N ∼ q(pθ(·|y, x, f(y))}, D′ = ⋃ x∈X D′x (5)
and updating the datapool D ← D∪D′. The hypotheses y to correct can come from any source, e.g. newly sampled from the base generator, or from the datapool; we use the latter in our experiments.
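A compact sketch of the pairing and sampling steps (equations 3 and 4) is given below; the similarity function, hyperparameters, and datapool layout are placeholder assumptions, and the normalization is taken over all pairs rather than per hypothesis for brevity.

```python
# Sketch of forming value-improving pairs (Eq. 3) and sampling them with the
# improvement/proximity weighting of Eq. 4. Similarity choice, alpha, beta,
# and the datapool layout are illustrative assumptions.
import math
import random
from difflib import SequenceMatcher

def similarity(y, y_prime):
    return SequenceMatcher(None, y, y_prime).ratio()

def form_pairs(datapool_x):
    # datapool_x: list of (output, value) tuples for one prompt x
    return [(y, yp, v, vp) for (y, v) in datapool_x for (yp, vp) in datapool_x if v < vp]

def sample_pair(pairs, alpha=1.0, beta=1.0):
    # Eq. 4 weighting; normalized over all pairs here for simplicity
    # (the paper normalizes over the corrections available for each y).
    weights = [math.exp(alpha * (vp - v) + beta * similarity(y, yp))
               for (y, yp, v, vp) in pairs]
    y, yp, _, _ = random.choices(pairs, weights=weights, k=1)[0]
    return y, yp

pool = [("x = 2 + 2", 0.0), ("print(2 + 3)", 0.0), ("print(2 + 2)", 1.0)]
hypothesis, correction = sample_pair(form_pairs(pool))
```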
Inference. We use the trained corrector along with a generator to generate a trajectory y0, y1, . . . , yT , and consider yT the final output. Since marginalizing over the intermediate generations in Eq. 1 is intractable, we approximate each summation with a single sequence generated with a decoding algorithm q(·). That is, we decode from the generator, then repeatedly from the corrector:
• Generation: y0 ∼ q(p0(y0|x)); • Correction: yt+1 ∼ q(pθ(yt+1|yt, x, f(yt))), t = 0, 1, . . . , T − 1. The stopping time T is either fixed, or when a target value is obtained (if v(y) is available).
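The generate-then-correct loop can be sketched as follows; generate and correct stand in for decoding from p0 and pθ, and the early-stopping rule mirrors the description above.

```python
# Sketch of self-correction inference: one generator pass followed by up to T
# corrector passes, stopping early once a target value is reached. `generate`,
# `correct`, `value`, and `feedback` are placeholders for the actual models.
def self_correct(x, generate, correct, value=None, feedback=None, T=3, target=1.0):
    y = generate(x)
    for _ in range(T):
        if value is not None and value(y) >= target:
            break
        fb = feedback(y) if feedback is not None else None
        y = correct(x, y, fb)
    return y
```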
3 EXPERIMENTS
We evaluate SELF-CORRECTION on a diversity of tasks: mathematical program synthesis, in which generations are strictly correct or incorrect, and generators typically have low performance; lexically-constrained generation, which allows for partial credit, and generators usually give partially-correct solutions (e.g. matching 3 out of 5 constraints); and toxicity control, where ‘correctness’ is more loosely defined, and the output space is much more open-ended. Our experiments are organized to study three settings:
1. Using self-correctors to improve upon generators (§3.1,3.2,3.3). 2. Correcting generators that are much larger than the corrector (§3.4). 3. Leveraging explicit feedback during training and inference (§3.5).
Next, we describe the self-correction setup and baselines for each task, along with their results. ∗
3.1 MATHEMATICAL PROGRAM SYNTHESIS
First, we consider mathematical program synthesis (Austin et al., 2021; Mishra et al., 2022). Given a natural language problem specification x, the task is to generate a program y that upon execution returns the correct answer to x. The task is challenging as it draws on language understanding, multiple-step mathematical problem solving (e.g. identifying a solution strategy, decomposing a problem), and leveraging symbolic tools (e.g. built-in operations, variables). Furthermore, the task demands a high level of precision, e.g. a single misplaced operation makes the program incorrect.
Experimental setup. As the corrector we use GPT-Neo 1.3B (Black et al., 2021), an open-source autoregressive language model. GPT-Neo is pre-trained on language and code (Gao et al., 2021), and hence is widely used for code-related generation (e.g. Chen et al. (2021); Ni et al. (2022); Mishra et al. (2022)). We consider two settings for the initial generator: (1) a separate fine-tuned instance of GPT-Neo 1.3B, and (2) few-shot prompted GPT-3 (Brown et al., 2020). For GPT-3, we evaluate the davinci and text-davinci-002 engines, representative of large (≈ 175B∗) generators that are state-of-the-art in related tasks (Wei et al., 2022). See the Appendix for additional details.
∗Code will be available at www.github.com/wellecks/self_correction. ∗Estimated size of davinci (https://blog.eleuther.ai/gpt3-model-sizes). Further details not available.
Self-correction setup. As the value function we use correctness, which is 1 when the program y executes and outputs the ground-truth answer and 0 otherwise. Our main experiments do not use explicit feedback, i.e. f(y) = ∅. At inference time, we study two settings for the corrector: (1) applying k corrections and selecting the final generation, (2) an oracle setting that only corrects a draft if the draft is incorrect. We use greedy decoding for the generator and corrector, and k = 1.
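The 0/1 correctness value used here amounts to executing the candidate program and comparing its answer with the ground truth; the sketch below omits sandboxing and assumes, as an illustration rather than the paper's convention, that the program stores its result in a variable named answer.

```python
# Sketch of a 0/1 correctness value function for mathematical program synthesis.
# Real use would require proper sandboxing and timeouts; the `answer` variable
# convention is an assumption for illustration only.
def correctness_value(program: str, ground_truth: float) -> float:
    scope = {}
    try:
        exec(program, {"__builtins__": {}}, scope)   # crude guard: no builtins
        return float(abs(float(scope["answer"]) - ground_truth) < 1e-6)
    except Exception:
        return 0.0

print(correctness_value("answer = (3 + 5) * 2", 16))   # 1.0
```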
Datasets. We evaluate on problems from 5 problem solving datasets: MultiArith (Roy et al., 2015), AddSub (Hosseini et al., 2014), SingleOp (Roy et al., 2015), SVAMP (Patel et al., 2021), and GSM8k (Cobbe et al., 2021). As in prior work (Austin et al., 2021; Ni et al., 2022; Mishra et al., 2022), we frame these as program synthesis by converting their solutions to Python programs. We separate our experiments into three increasingly difficult settings:
1. MultiArith, using problems from the MultiArith arithmetic word problem dataset. 2. Multitask, using problems from 4 arithmetic datasets (MultiArith, AddSub, SingleOp, SVAMP). 3. GSM, using problems from the challenging GSM8k dataset.
For the MultiArith and Multitask settings, we make train/valid/test splits using 60/20/20% of the respective datasets. Similar to Ni et al. (2022), for the GSM setting we use the official GSM8k test split, and create a validation split using 20% of the training set. Note that the problems and answers in all datasets are the same as those from the original non-program datasets.
Baselines. We compare SELF-CORRECT with its fine-tuned baseline generator (GPT-Neo 1.3B) in all three settings. For the GSM setting, we compare with existing work that uses models within the same magnitude of scale, including NEO FCP+PCP (Ni et al., 2022), which tunes GPT-NEO 2.7B with additional self-sampled programs, and their fine-tuned GPT-NEO 2.7B baseline. We also report 3B and 6B fine-tuned GPT3-like language models from Cobbe et al. (2021), which were trained on the non-program version of GSM8k. We evaluate larger models later in (§3.4).
Results. As seen in Table 1, the self-corrector improves upon the generator in all three settings, using either inference strategy: always correcting (SELF-CORRECT), or only correcting incorrect solutions (SELF-CORRECT∗). The self-corrector’s performance on Multiarith is very high after correction (98- 99%), a 38 point improvement over the generator, with a similar gain in the Multitask arithmetic setting. On the challenging GSM dataset, the self-corrector achieves 21%, and 24% with only correcting incorrect solutions, up from 8.57% for the generator. Notably, this is higher than the larger 2.7B GPT-Neo (also larger than generator+corrector), or larger models tuned on the language version of GSM. The results show that self-corrective learning can improve task performance via training a corrector. Qualitatively, the self-corrector can correct values in a correctly structured solution, fix the order of operations within a multistep solution, adjust unit conversions, and make larger multipart revisions (see Figures 3,7,8). Notably, these are learned automatically.
3.2 LEXICALLY CONSTRAINED GENERATION
Next, we consider lexically constrained generation. Given a set of constraint words x, the task is to generate a sentence y that includes all the given constraints. Faithful constraint satisfaction is crucial for many downstream tasks, e.g., those that require converting information to text (McKeown, 1985).
Datasets and Metrics. We experiment on COMMONGEN (Lin et al., 2020) and E2E (Novikova et al., 2017). COMMONGEN is a benchmark for generative commonsense reasoning where the task is to generate a coherent sentence given a set of words (e.g., dog, catch). E2E involves converting structured inputs into natural language. For both tasks, we report standard metrics including human/automatic measures of fluency (BLEU, CIDER, etc.) as well as constraint coverage. We collect human measures of fluency on Amazon Mechanical Turk; see the Appendix for details.
Setup. We parameterize the base generator with GPT-2 Radford et al. (2019) (large-size for COMMONGEN and medium-size for E2E). We fine-tuned the generator for each task. As the value function for self-corrective learning we use coverage, i.e. the percentage of constraints that are present in the output. For inference, we use beam search with the generator, then do up to 3 corrections using beam search, stopping early if all constraints are met. See the Appendix for additional details.
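A minimal sketch of the coverage value function is shown below; real constraint matching may also need to handle inflections (e.g. "catch"/"catches"), which this simple substring check only partially covers.

```python
# Hedged sketch of the constraint-coverage value function v(y): the fraction of
# constraint words that appear in the generated sentence.
def coverage_value(constraints: list, sentence: str) -> float:
    tokens = sentence.lower().split()
    covered = sum(any(c.lower() in t for t in tokens) for c in constraints)
    return covered / max(len(constraints), 1)

print(coverage_value(["dog", "catch", "frisbee"],
                     "A dog catches the frisbee in the park."))  # -> 1.0
```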
Results. Table 2 shows the evaluation results. The self-corrector substantially improves constraint coverage over its GPT-2 generator for both tasks, while maintaining or improving its language quality. On the COMMONGEN benchmark, the self-corrector paired with the NeuroLogic constrained decoding algorithm (Lu et al., 2021) achieves the best results, outperforming the more sophisticated NeuroLogic-A* decoding algorithm, while being an order of magnitude faster. Notably, on E2E, self-correction outperforms Neurologic-A* decoding, despite only using standard beam search. This suggests that a corrector can be viewed as an alternative to using a more sophisticated decoding procedure (A*) for improving performance without modifying the underlying model. See Figure 9.
3.3 TOXICITY REDUCTION
Next, we consider the task of toxicity reduction (Gehman et al., 2020; Liu et al., 2021). Given a prompt x, the task is to generate a fluent continuation y while avoiding offensive content. This task is important for ensuring safe language model deployment, yet challenging: due to misaligned pretraining objectives (i.e. modeling internet text vs. non-toxic text), language models are susceptible to generating toxic completions, even when prompted with seemingly innocuous text (Gehman et al., 2020). Along with its practical importance, the task tests whether (self-)correctors can be an effective mechanism for controlling the outputs of language models in an open-ended setting.
Table 3: Toxicity reduction with GPT-2 as the base generator (SELF-CORRECT row: 0.171, 0.026, 11.81, 0.80, 0.83).
Datasets and Metrics. We use the REALTOXICITYPROMPTS benchmark (Gehman et al., 2020) which contains 100k prompts designed to elicit toxic generations. Following the experimental setup of Liu et al. (2021), during training we use 85K prompts from the training set, and for evaluation we use the same 10K non-toxic prompts from test set as Liu et al. (2021). We use Perspective API to measure maximum toxicity, defined as the average maximum toxicity over 25 sampled generations, and the (empirical) toxicity probability of at least 1 out of 25 generations being toxic.
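As a worked example of these two metrics, the sketch below computes them from per-sample toxicity scores; the 0.5 cutoff for calling a sample toxic is an assumption (a commonly used threshold), not quoted from the text.

```python
# Hedged sketch of the toxicity metrics described above, given Perspective API
# scores for the sampled generations of each prompt.
def toxicity_metrics(scores_per_prompt, threshold=0.5):
    max_tox = [max(scores) for scores in scores_per_prompt]       # per-prompt max over samples
    avg_max_toxicity = sum(max_tox) / len(max_tox)                # "maximum toxicity"
    toxicity_prob = sum(m > threshold for m in max_tox) / len(max_tox)  # P(at least one toxic sample)
    return avg_max_toxicity, toxicity_prob

example = [[0.1, 0.7, 0.2], [0.05, 0.1, 0.2]]   # toy scores (3 samples per prompt here)
print(toxicity_metrics(example))                 # ≈ (0.45, 0.5)
```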
Baselines. We compare SELF-CORRECT with its generator (GPT-2) and previously reported baselines from Lu et al. (2022a), including PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2021), DExpert (Liu et al., 2020), DAPT (Gururangan et al., 2020), PPO (Lu et al., 2022a), and Quark (Lu et al., 2022a). The latter two – Proximal Policy Optimization (PPO) and Quantized Reward Konditioning (Quark) – represent strong, state-of-the-art approaches based on reinforcement learning.
Setup. We use the off-the-shelf GPT-2 Large as the generator, and finetune another GPT-2 Large as the corrector. During inference, we use nucleus sampling with p = 0.9 to generate 25 samples for all baselines. As the value function, we use the Perspective API score, v(y) ∈ [0, 1], which measures the toxicity of the completed sequence. We do up to three corrections with the corrector model.
Results. Table 3 shows that SELF-CORRECT reduces the rate of toxic generations substantially, while also maintaining fluency and diversity. SELF-CORRECT outperforms all baselines. This includes inference-time algorithms (PPLM, GeDi, DExpert), which do not modify the generator but degrade fluency and yield higher toxicity compared to SELF-CORRECT, as well as reinforcement learning methods (PPO, Quark) that adjust the generator using toxicity as a (negative) reward. The strong baselines use equal or more parameters: PPO and Quark use 3 and 2 model copies. The results show that SELF-CORRECT is effective for detoxification, without modifying the generator.
3.4 CHANGING MODULES – CORRECTING GPT-3
Next, we show that a self-corrector can improve the outputs of a generator that is much larger than the corrector. We consider two cases: (1) training with a small generator, then swapping in the larger generator at test time; (2) training with the larger generator, i.e. using the large generator to initialize the datapool for self-corrective learning, then using the large generator at test time.
Toxicity. We evaluate case (1) for reducing the toxicity of a large generator (GPT-2 XL, GPT-3). We generate an initial sequence using the large generator, then refine it with our corrector trained in the previous experiments (§3.3). Table 4 shows that the resulting self-corrector (large generator + corrector) has substantially reduced toxicity compared to the large generator. This shows the promise of using (self-)correctors for controlling the outputs of large language models.
Math program synthesis. Table 4 shows results for math. Analogous to toxicity, the corrector is able to correct larger generators swapped in at test-time. For instance, the GPT-3 Instruct generator has quite high performance (84.90 Multitask, 36.80 GSM), which improves to 90.90 and 45.00,
respectively, by adding in a corrector. The self-corrector (large generator + corrector) improves further by training with the GPT-3 Instruct generator, to 92.75 and 45.92, respectively.
3.5 LEVERAGING EXPLICIT FEEDBACK
Next, we demonstrate SELF-CORRECT’s capacity to incorporate explicit natural language feedback. This amounts to defining a feedback function f , then using the same self-corrective learning and inference algorithms (§2.1) as in our preceding experiments (in those experiments, f returned ∅). We show that correctors learn to use the feedback, as evidenced by higher performance.
Toxicity. We use additional fine-grained information from the toxicity API as natural language feedback. Specifically, besides the overall toxicity score, Perspective API also provides scores for fine-grained attributes of toxicity (e.g. identity attack, profanity, flirtation, etc.). At training time, we compare the attribute scores from a hypothesis and its selected correction, and use the attribute with the largest decrease as natural language feedback (e.g. "decrease toxicity in profanity"). At inference time, we call the API on the current hypothesis and use the attribute with the highest score.
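The sketch below illustrates how the fine-grained attribute scores can be turned into the feedback string described above; the attribute names and score dictionaries are illustrative placeholders.

```python
# Hedged sketch of attribute-based toxicity feedback.
def training_feedback(hyp_scores, corr_scores):
    drops = {a: hyp_scores[a] - corr_scores[a] for a in hyp_scores}
    attr = max(drops, key=drops.get)            # attribute with the largest decrease
    return f"decrease toxicity in {attr}"

def inference_feedback(hyp_scores):
    attr = max(hyp_scores, key=hyp_scores.get)  # currently highest-scoring attribute
    return f"decrease toxicity in {attr}"

print(training_feedback({"profanity": 0.8, "identity_attack": 0.3},
                        {"profanity": 0.2, "identity_attack": 0.25}))
# -> decrease toxicity in profanity
```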
Lexical constraints. At training time, we generate natural language feedback for every example pair (x, y, y′) by elaborating the extra lexical constraints satisfied by y′ but not y, e.g. “adding constraint word: read”. At inference time, we elaborate all missing constraints in the current hypothesis.
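A small sketch of this constraint feedback is given below; constraint matching is again a simple substring check for illustration.

```python
# Hedged sketch of lexical-constraint feedback strings.
def training_feedback(constraints, y, y_prime):
    new = [c for c in constraints if c in y_prime.lower() and c not in y.lower()]
    return "; ".join(f"adding constraint word: {c}" for c in new)

def inference_feedback(constraints, y):
    missing = [c for c in constraints if c not in y.lower()]
    return "; ".join(f"adding constraint word: {c}" for c in missing)

print(inference_feedback(["dog", "catch", "frisbee"], "A dog runs in the park."))
# -> adding constraint word: catch; adding constraint word: frisbee
```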
Math program synthesis. Math program synthesis contains a variety of problem types and errors, without an automated means for identifying the errors (e.g. an API). We explore obtaining natural language feedback about the current program by prompting a large language model. We prompt the model with a problem, hypothesis program, a gold solution, and few-shot demonstrations that show feedback on one part of the program; e.g. In the initial guess, 3 should be subtracted. When the program is correct, the feedback is Correct. At inference time, we also use feedback from the language model. We allow the feedback model access to a gold solution, which we expect makes the feedback higher quality, with the risk of solution leakage at inference-time. Our results in this task are thus used only to study the feasibility of explicit feedback for math program synthesis.
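For illustration, a sketch of assembling such a few-shot feedback prompt is shown below; the field layout and the demonstration contents are assumptions for illustration (the paper uses 6 hand-written demonstrations).

```python
# Hedged sketch of a few-shot prompt for eliciting natural language feedback on
# a hypothesis program from a large language model.
def feedback_prompt(demos, problem, gold_solution, hypothesis):
    blocks = []
    for d in demos:                      # few-shot demonstrations
        blocks.append(f"Problem: {d['problem']}\nGold solution: {d['gold']}\n"
                      f"Hypothesis: {d['hypothesis']}\nFeedback: {d['feedback']}")
    blocks.append(f"Problem: {problem}\nGold solution: {gold_solution}\n"
                  f"Hypothesis: {hypothesis}\nFeedback:")
    return "\n\n".join(blocks)

demos = [{"problem": "Tom buys 3 bags of 5 apples. How many apples?",
          "gold": "answer = 3 * 5",
          "hypothesis": "answer = 3 + 5",
          "feedback": "The counts should be multiplied, not added."}]
print(feedback_prompt(demos, "Sam reads 4 pages a day for 6 days. How many pages?",
                      "answer = 4 * 6", "answer = 4 + 6"))
```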
Setup. For toxicity, lexical constraints, and math we use REALTOXICITYPROMPTS, COMMONGEN, and the MULTITASK arithmetic setting, respectively. We follow the setup of each task’s previous experiments (§3.3,§3.2,§3.1), except for math we use 5 correction iterations (previously 1). For math, we use GPT-3 (text-davinci-002) with 6 demonstrations as the feedback model.
Results. Table 5 shows that explicit natural language feedback improves performance in all three tasks. For toxicity, this means that providing fine-grained attributes (e.g. identity attack, profanity, etc.) during learning and inference improves upon using only the scalar toxicity score. Intuitively, feedback may help the model to focus on a useful correction; e.g., see Figure 6.
Figure 5: Math: multiple corrections.
Ablation                   Math     COMMONGEN
SELF-CORRECT               78.24    94.55
✗ proportional sampling    77.25    93.49
✗ value pairing            62.35    91.76
Table 6: Effect of pairing and proportional sampling.
Exploration    Multiarith    Multitask    GSM8k
✗              89.20         73.49        17.60
✓              99.17         78.24        23.96
Table 7: Effect of exploration on program synthesis.
3.6 ADDITIONAL ABLATIONS AND ANALYSIS
Effect of multiple corrections. Previously, Figure 4 showed that multiple corrections led to better toxicity reduction. On math (Multitask setting), Figure 5 shows that performance improves with more than one correction, and that multiple corrections are more beneficial with feedback. Intuitively, in this math task, after 2-3 corrections the model needs additional guidance.
Effect of pairing and proportional sampling. Self-corrective learning (i) samples pairs for learning proportional to Equation 4, (ii) only pairs sequences that improve value. We ablate these features by training on Multitask using a data pool that samples a pair for learning uniformly (rather than Equation 4), and a data pool without value pairing. Table 6 shows that both improve performance.
Effect of exploration. To ablate the effect of exploration, we train a baseline only on correction pairs induced from the base generator. Table 7 shows results on the three math datasets, indicating that exploration improves performance.
4 RELATED WORK
Self-Correction relates to work modeling text edits including supervised Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), unsupervised perturbations (Miao et al., 2019; Liu et al., 2020), training on human-written critiques (Saunders et al., 2022), or refining continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast, Self-Correction learns a text corrector online to improve a quality measure without supervised edits or critiques. Recently, Scheurer et al. (2022) use natural language feedback to improve generations. Denoising sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Reinforcement learning (RL) is often used to improve scalar measures in a generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), yet is infeasible for many models (e.g. those accessed by API), and uses only scalar feedback. Moreover, RL-tuned generators can be used within Self-Correction. Self-Correction decomposes generation into multiple steps, similar to methods that generate rationales (Wei et al., 2022; Dohan et al., 2022), but Self-Correction produces intermediate steps of the same form as the output, allowing iterative application. Self-Correction relates to work on program synthesis (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022) and repair (Gupta et al., 2020; Yasunaga & Liang, 2020). Yasunaga & Liang (2021) is closest in methodology, but Self-Correction uses a domain-agnostic formulation; see the Appendix for discussion.
5 CONCLUSION
We introduced self-correctors, a class of models that decompose generation into initial generation and correction steps. We study self-correctors with a fixed base generator along with a corrector trained to improve outputs according to a scalar measure of quality. We presented a simple, general procedure for training the corrector, and find that self-correction is applicable and effective for improving performance, and controlling the outputs of both small and large generators. Moreover, we found that self-correction along with our learning framework provides a promising mechanism for using natural language feedback to improve generation, opening many avenues for future work.
A RELATED WORK
Self-correction provides a flexible framework for improving the performance of off-the-shelf and fine-tuned language models on a wide range of tasks by decomposing generation into a base generator and a corrector. Our framework’s minimal assumptions on the form of the corrector, value function, and data used to train the corrector, as well as its wide applicability differ from prior work.
Learning to fix code. Our work relates to two streams of research in the code domain. One stream deals with program synthesis, in which a corrector model corrects code from a base synthesizer until it meets a given specification (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022), while another stream deals with program repair: correcting code that is provided as input (Gupta et al., 2020; Yasunaga & Liang, 2020; 2021). Recently, Le et al. (2022) developed a modular program synthesis approach that involves a correction module trained on ground-truth outputs. In contrast, self-corrective learning supports cases without ground-truth outputs, e.g. toxicity.
Closest to our methodology is Yasunaga & Liang (2021). Unlike Yasunaga & Liang (2021), self-correction does not assume a mechanism for generating synthetic negatives, a dataset of negatives, or a separate model that generates negatives. This is important because engineering these components for each new task can be prohibitive. Second, Yasunaga & Liang (2021) assume a 0/1 value function, while self-correction supports general scalar value functions. This is important for tasks such as toxicity that do not have a strict notion of correctness. Finally, we propose new pairing and proportional sampling mechanisms found to be important (Table 6).
Iterative text edits. Self-correction relates to recent works on editing text, including modeling Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), which relies on supervised edits, unsupervised methods (Miao et al., 2019; Liu et al., 2020) that perturb sequences with simple operations (e.g. insertion, deletion), editing with models trained on human-written critiques (Saunders et al., 2022), or iteratively updating continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast to these, self-correction learns an expressive text-to-text corrector that is trained online to improve a quality measure, without requiring a supervised dataset of edits or critiques. Recently, Scheurer et al. (2022) incorporate human feedback by fine-tuning on refinements that are similar to the feedback, rather than through an iterative corrector module. Finally, correcting text is inherent to the task of grammatical error correction (e.g. Lichtarge et al. (2019); Yasunaga et al. (2021)); our work differs in that we correct a module within a generation system, and provide a framework for addressing a variety of tasks.
Denoising and reinforcement learning. Separately, denoising ground-truth sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Scalar measures are often improved with reinforcement learning (RL) on a base generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), which is infeasible for improving many language models (e.g. those accessed through an API), and uses only scalar feedback. Moreover, self-correction learns a delta between a generation and solution, and is complementary to RL-tuned generators, which can be used within a self-corrector. Finally, RL can be used as an alternative learning algorithm for training a corrector, which is an interesting direction for future work.
Modular generation. Self-correction decomposes generation into multiple steps, and is thus part of the general class of methods that decompose generation into a ‘cascade’ of modules (Dohan et al., 2022). Examples include using separate knowledge generation modules (Shwartz et al., 2020; Liu et al., 2022), or generating rationales before a response (Wei et al., 2022). Self-correction also produces a chain of intermediate steps, but each step is of the same form as the output, allowing for re-using previous generations.
B ADDITIONAL EXPERIMENTAL DETAILS
B.1 CROSS-EXPERIMENT DETAILS
In all of our experiments we use an off-the-shelf embedding similarity function from SentenceTransformers (Reimers & Gurevych, 2019): sentence-transformers/all-MiniLM-L6-v2.
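A sketch of using this model as the proximity function s(y, y′) is shown below; it requires the sentence-transformers package, and reducing the embeddings to a scalar via cosine similarity is an assumption for illustration.

```python
# Hedged sketch of the embedding-based proximity function s(y, y').
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def proximity(y: str, y_prime: str) -> float:
    emb = model.encode([y, y_prime], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(proximity("answer = 3 + 5", "answer = 3 * 5"))  # high similarity: a small edit
```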
B.2 MATHEMATICAL PROGRAM SYNTHESIS
We fine-tune a separate instance of GPT-Neo 1.3B as an initial generator, using the Huggingface library with default hyperparameters, except for evaluation steps, which we set to a small number to ensure a strong checkpoint is selected for each dataset. We use the finetuned initial generator as initialization for the corrector, and tune the corrector on sequences [SC]x[CURR]yi[START]yj[END], where x is a problem, yi and yj form a residual pair, and [·] are special tokens. The loss is on tokens after [START].
Feedback. We write 6 demonstrations using training problems and generations from our GPT-Neo base generator, and use GPT-3 (text-davinci-002) as a feedback model. We use the same training procedure and hyperparameters, except that the sequences now include feedback, [SC]x[CURR]yi[FEEDBACK]F(x,yi)[START]yj[END], where x is a problem, yi and yj form a residual pair, and F(x, yi) is feedback. We include loss on tokens after [FEEDBACK].
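The sketch below assembles a corrector training sequence in the two formats described above; the bracketed special tokens are written as literal strings here, whereas in practice they would be added to the tokenizer vocabulary.

```python
# Hedged sketch of the corrector training sequence format.
def corrector_sequence(problem, hypothesis, correction, feedback=None):
    seq = f"[SC]{problem}[CURR]{hypothesis}"
    if feedback is not None:
        seq += f"[FEEDBACK]{feedback}"
    return seq + f"[START]{correction}[END]"

print(corrector_sequence("Tom buys 3 bags of 5 apples. How many apples?",
                         "answer = 3 + 5", "answer = 3 * 5",
                         feedback="The counts should be multiplied."))
```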
B.3 LEXICALLY-CONSTRAINED GENERATION
Hyper-parameters. Table 8 and Table 9 show hyperparameters for CommonGen and E2E.
Human Evaluation. We evaluate fluency of generations in E2E task using human annotators on Amazon Mechanical Turk (AMT). We randomly sampled 100 instances, along with generations of different baselines and self-corrections. For each instance, we ask 3 annotators to evaluate the fluency of generations on a 3-point Likert scale. We aggregate annotations from 3 annotators using majority vote. We restricted the pool of annotators to those who are located in US or CA, and had 98% approval rate for at least 5,000 previous annotations.
Hyperparameter    Assignment
Predictor         GPT-2 Large
steps             6000
batch size        128
optimizer         Adam
learning rate     1e-5
decoding alg.     beam search (k=5)
Table 8: Hyperparameters for COMMONGEN.
Hyperparameter    Assignment
Predictor         GPT-2 Medium
steps             10000
batch size        100
optimizer         Adam
learning rate     1e-5
decoding alg.     beam search (k=5)
Table 9: Hyperparameters for E2E.
C ADDITIONAL RESULTS
D QUALITATIVE EXAMPLES | 1. What is the focus and contribution of the paper on text generation?
2. What are the strengths and weaknesses of the proposed self-correction method?
3. Do you have any concerns regarding the name of the method and its limitation in novelty?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any specific questions or suggestions for improving the method, such as incorporating feedback signals or providing more details on the experimental setting? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a self-correction method which trains a corrector to iteratively correct imperfect generation results. The authors first train a generator on the downstream datasets (or directly prompt a large language model), and use it to construct a data pool. Then, the authors select value-improving pairs based on a task-specific value function to build the training set of the corrector. Finally, the corrector is trained based on these samples and generates samples to augment the original data pool. Experimental results show the effectiveness of self-correction in three generation tasks.
Strengths And Weaknesses
Strengths:
This paper is well organized and easy to follow.
The proposed method can be applied to a wide range of text generation tasks. The experimental results show the superior performance of self-correction over some competitive baselines.
Weaknesses:
The name of the method “self-correction” is confusing for me, because the authors train a separate corrector to improve the base generator. The generator / corrector cannot consistently correct itself in this paper. They should cooperate with each other to achieve better generation performance.
From the perspective of correctors, the proposed method seems to train a text editing model (corrector) via pseudo-labeled data generated by a pre-trained model (generator). Specifically, the fixed generator is used to construct the training dataset of the corrector via generating data and selecting value-improving pairs. Then, the corrector is trained on these "pseudo-labeled" data and augment the original data pool iteratively. Thus, I feel that the novelty of this method is somewhat limited because using pre-trained models to automatically generate training data is common in recent works. I don’t find any specific design when training the corrector.
The feedback has been mentioned for many times in this paper. But this part is individual compared with the whole design. I don’t find any specific module to properly incorporate the feedback signal into the corrector.
The experimental setting may be unfair because the corrector has a relatively large amount of model parameters. Thus, the total number of parameters in self-correction (including the generator and the corrector) is significantly larger than that of other baselines.
Clarity, Quality, Novelty And Reproducibility
The authors should further clarify the method design and the experimental settings. The overall quality of this paper is OK. But in my view, the novelty of the proposed method is somewhat limited from the perspective of correctors. The reproducibility of this paper is degraded due to the lack of codes. |
ICLR | Title
Generating Sequences by Learning to Self-Correct
Abstract
Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints, and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present SELF-CORRECTION, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that SELF-CORRECTION improves upon the base generator in three diverse generation tasks – mathematical program synthesis, lexically-constrained generation, and toxicity control – even when the corrector is much smaller than the base generator.
1 INTRODUCTION
The standard practice for natural language generation tasks is inherently single-pass: applying a decoding procedure to either a few-shot prompted language model or one tuned for a given task, then considering the generation as “finished” (e.g. Radford et al. (2019); Brown et al. (2020); Chen et al. (2021)). Powerful generation models often meet most of the task requirements, yet miss a few (e.g., omitting a subset of keywords), or generate incorrect hypotheses that nevertheless provide useful structure (e.g., a correct problem solving strategy with a missing step). However, after generating even a slightly sub-optimal sequence, the single-pass paradigm requires models to “start from scratch”, effectively discarding work already done. A more natural, intuitive approach is leveraging the generation as a useful starting point to refine into a higher quality output.
To formalize this intuition, we introduce Self-Correction for Sequence Generation. Figure 1 demonstrates its central principle: a generation model is re-framed as a base generator, which produces a reasonable initial hypothesis but does not need to solve the task in one pass, and a second module–the corrector–trained to make up the difference between the hypothesis and an optimal solution. Neither the generator nor the corrector must solve the full task in one pass, and the corrector can be applied multiple times to iteratively improve the output (§3.6). We propose a simple, general procedure for training the corrector (Figure 2) by pairing generator outputs with carefully selected targets. The result is a system which self-corrects, producing outputs through multiple generation passes and breaking the task into steps that can be solved by dedicated and efficient sub-systems.
Self-Correction builds on past work for correction in the code and text (e.g. Yasunaga et al. (2021); Faltings et al. (2021)) domains, but provides a unified formalism with minimal assumptions about data and feedback, which applies generally to diverse tasks. A corrector model improves the base generator on 3 such tasks in our experiments: mathematical program synthesis (§3.1), lexically constrained generation (§3.2), and toxicity reduction (§3.3). The trained corrector model even transfers to a larger generator with similar performance to training from scratch (§3.4). Finally, we explore introducing a third module to the Self-Correction system (§3.5)–explicitly using natural language feedback to guide corrections–with promising results. Self-Correction is an exciting path to build on the generations of strong models, with efficient, effective, and transferable corrector networks.
∗First authors, contributed equally. †Second authors, contributed equally.
2 SELF-CORRECTING SEQUENCE GENERATORS
A typical autoregressive text generator (e.g. GPT-3 (Brown et al., 2020)) maps an input prompt to a distribution over outputs using a single parameterized module (e.g. a large transformer), p0(y|x). We explore an alternative that decomposes into two modules, a base generator, and a corrector,
p(y|x) = ∑_{y0} p0(y0|x) pθ(y|y0, x),    (1)
where p0(y0|x) is the base generator and pθ(y|y0, x) is the corrector: the generator provides an initial hypothesis that is refined by the corrector. In practice, the corrector can be applied multiple times, p(yT|x) = ∑_{y0} ∑_{y1} · · · ∑_{yT−1} p0(y0|x) ∏_t pθ(yt+1|yt, x). Since a model of this form can both generate and correct its generations, we call it a Self-Corrector.
Self-correctors have several unique properties compared to typical generators. First, a self-corrector decouples generation and correction, allowing us to freely parameterize each module – for instance, by prompting a single language model or using two different language models. In this paper, we develop a framework to train a separate corrector model (§2.1). We find that the resulting self-corrector improves upon the generator alone (§3), even when the corrector is much smaller (§3.4).
Second, since the generator and the corrector are separated, we can keep the generator as a general-purpose language model and train the corrector with different objectives for different task requirements. In §2.1, we propose a training algorithm for the corrector that is dedicated to improving generations, where the improvement can be in any aspect, measured by scalar values.
Third, the corrector can receive explicit feedback about intermediate generations to guide subsequent generations. Formally, p(y|x) = ∑_{y0} p0(y0|x) pθ(y|y0, x, f(y0)), where f is the feedback. The feedback can be of many forms, e.g. a sentence, a compiler trace, etc. In contrast, a typical generator that generates in a single pass does not leverage feedback on its own generation. In this paper, we show that the corrector can learn to exploit explicit natural language feedback to achieve better performance (§3.5). Next, we describe our training framework of the corrector.
2.1 LEARNING A CORRECTOR
Our goal is to have the generator generate an initial hypothesis, then improve the hypothesis with the corrector (Eq. 1). We train the corrector to improve the quality of a hypothesis, while staying as close as possible to the original hypothesis. Here, quality is measured with a scalar value function v(y) which is accessible at training time (e.g. 0/1 indicator of program correctness, a toxicity score).
Algorithm 1 Self-corrective learning
input Generator p0, corrector pθ, prompts X, value v(·), feedback f(·)
  Initialize datapool D by sampling from p0    ▷ Initialization: Eq. 2
  for iteration ∈ {1, 2, . . .} do
    Form value-improving pairs P from D    ▷ Pairing: Eq. 3
    for step in 1, 2, . . . , M do
      Sample a batch of value-improving pairs from P using Eq. 4
      Compute the loss and update θ using gradient descent    ▷ Learning
    for x ∈ X do
      Sample hypotheses y from datapool D
      Generate corrections y′ ∼ pθ(·|y, x, f(y))
      Add all (x, y′, v(y′), f(y′)) to the datapool D    ▷ Exploration: Eq. 5
Since direct supervision on how to improve hypotheses is not available, we design a new algorithm to train the corrector, which we refer to as self-corrective learning. The algorithm collects a pool of generations, pairs them and selects pairs of generation that increase in value and are nearby, then updates the corrector on these pairs. As training progresses, more generations are added to the pool using the current corrector. Algorithm 1 summarizes self-corrective learning, detailed below.
Initialization. Self-corrective learning begins with a generator p0(y0|x), a corrector pθ(y′|y, x) , a set of training prompts X , and a value function v : Y → R. Optionally, we can use additional feedback f : Y → F and learn pθ(y′|y, x, f(y)), where F is arbitrary. The algorithm initializes a datapool of (input, output, value, feedback) examples by using the generator to generate multiple outputs for each input. Formally,
Dx = {(x, y, v(y), f(y)) | for all y ∈ y1:N ∼ q(p0(·|x))},    D = ⋃_{x∈X} Dx,    (2)
where y1:N denotes N outputs generated with decoding algorithm q (e.g. temperature sampling). When available, (x, y, v(y), f(y)) examples from another source (e.g. a dataset) can also be added.
Pairing. Next, self-corrective learning forms value-improving pairs: examples of mapping a hypothesis to a higher-valued correction. We use the datapool D to form a set of (input, hypothesis, correction) pairs. A pair is formed when an output has a higher value than another ∗:
Px = {(x, y, y′) | v(y) < v(y′) for all y, y′ ∈ Dx × Dx},    P = ⋃_{x∈X} Px,    (3)
∗We also store the value and feedback for y and y′ along with (x, y, y′), which we omit to reduce clutter.
Learning. Next, self-corrective learning selects (input, hypothesis, correction) pairs to update the corrector with. We sample an input, x ∼ U(X), then sample a (x, y, y′) pair proportional to its improvement in value as well as the proximity between the hypothesis y and the correction y′:
P[(x, y, y′)|x] ∝ exp( α · (v(y′) − v(y)) + β · s(y, y′) ) / Z(y),    (4)
where the first term inside the exponent measures the improvement in value and the second measures proximity, s(y, y′) is a similarity function, and Z(y) normalizes over the available corrections for y in Px. Increasing the hyperparameter α ∈ R≥0 puts more weight on targets that add more value, while increasing β ∈ R≥0 retains more similar targets. We update the corrector using the cross-entropy loss L(θ) = − log pθ(y′|y, x, f(y)) on batches sampled in this way.
Exploration. During exploration, self-corrective learning adds new generations to the datapool by generating from the current corrector:
D′x = {(x, y′, v(y′), f(y′)) | for all y′ ∈ y′1:N ∼ q(pθ(·|y, x, f(y)))},    D′ = ⋃_{x∈X} D′x    (5)
and updating the datapool D ← D∪D′. The hypotheses y to correct can come from any source, e.g. newly sampled from the base generator, or from the datapool; we use the latter in our experiments.
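As a concrete illustration of the pairing and sampling steps above, the following sketch draws one (hypothesis, correction) pair for a single prompt. For simplicity it samples proportionally to the unnormalized weights rather than normalizing per hypothesis with Z(y), and a toy character-overlap function stands in for s(y, y′).

```python
# Hedged sketch of pairing (Eq. 3) and proportional sampling (Eq. 4) for one x.
import math, random

def sample_pair(pool, similarity, alpha=1.0, beta=1.0):
    pairs = [(y, y2) for (y, vy) in pool for (y2, vy2) in pool if vy < vy2]   # Eq. 3
    values = dict(pool)
    weights = [math.exp(alpha * (values[y2] - values[y]) + beta * similarity(y, y2))
               for (y, y2) in pairs]                                           # Eq. 4 (unnormalized)
    return random.choices(pairs, weights=weights, k=1)[0]

pool = [("answer = 3 + 5", 0.0), ("answer = 3 * 5", 1.0), ("answer = 15", 1.0)]
sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))    # toy similarity
print(sample_pair(pool, sim))
```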
Inference. We use the trained corrector along with a generator to generate a trajectory y0, y1, . . . , yT , and consider yT the final output. Since marginalizing over the intermediate generations in Eq. 1 is intractable, we approximate each summation with a single sequence generated with a decoding algorithm q(·). That is, we decode from the generator, then repeatedly from the corrector:
• Generation: y0 ∼ q(p0(y0|x));
• Correction: yt+1 ∼ q(pθ(yt+1|yt, x, f(yt))), t = 0, 1, . . . , T − 1.
The stopping time T is either fixed, or when a target value is obtained (if v(y) is available).
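The sketch below illustrates this inference loop; generate, correct, and value are placeholders for the decoded generator, the decoded corrector, and v(·).

```python
# Hedged sketch of self-correction inference: one generator pass, then repeated
# corrector passes, stopping early once a target value is reached.
def self_correct(x, generate, correct, value, max_corrections=3, target=1.0):
    y = generate(x)                               # y0 ~ q(p0(. | x))
    for _ in range(max_corrections):
        if value(y) >= target:                    # stop once the target value is met
            break
        y = correct(x, y)                         # y_{t+1} ~ q(p_theta(. | y_t, x))
    return y

# Toy usage with stand-in modules:
print(self_correct("3 bags of 5 apples",
                   generate=lambda x: "answer = 3 + 5",
                   correct=lambda x, y: "answer = 3 * 5",
                   value=lambda y: float(y == "answer = 3 * 5")))
```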
3 EXPERIMENTS
We evaluate SELF-CORRECTION on a diversity of tasks: mathematical program synthesis, in which generations are strictly correct or incorrect, and generators typically have low performance; lexically-constrained generation, which allows for partial credit, and generators usually give partially-correct solutions (e.g. matching 3 out of 5 constraints); and toxicity control, where ‘correctness’ is more loosely defined, and the output space is much more open-ended. Our experiments are organized to study three settings:
1. Using self-correctors to improve upon generators (§3.1, 3.2, 3.3).
2. Correcting generators that are much larger than the corrector (§3.4).
3. Leveraging explicit feedback during training and inference (§3.5).
Next, we describe the self-correction setup and baselines for each task, along with their results. ∗
3.1 MATHEMATICAL PROGRAM SYNTHESIS
First, we consider mathematical program synthesis (Austin et al., 2021; Mishra et al., 2022). Given a natural language problem specification x, the task is to generate a program y that upon execution returns the correct answer to x. The task is challenging as it draws on language understanding, multiple-step mathematical problem solving (e.g. identifying a solution strategy, decomposing a problem), and leveraging symbolic tools (e.g. built-in operations, variables). Furthermore, the task demands a high level of precision, e.g. a single misplaced operation makes the program incorrect.
Experimental setup. As the corrector we use GPT-Neo 1.3B (Black et al., 2021), an open-source autoregressive language model. GPT-Neo is pre-trained on language and code (Gao et al., 2021), and hence is widely used for code-related generation (e.g. Chen et al. (2021); Ni et al. (2022); Mishra et al. (2022)). We consider two settings for the initial generator: (1) a separate fine-tuned instance of GPT-Neo 1.3B, and (2) few-shot prompted GPT-3 (Brown et al., 2020). For GPT-3, we evaluate the davinci and text-davinci-002 engines, representative of large (≈ 175B∗) generators that are state-of-the-art in related tasks (Wei et al., 2022). See the Appendix for additional details.
∗Code will be available at www.github.com/wellecks/self_correction.
∗Estimated size of davinci (https://blog.eleuther.ai/gpt3-model-sizes). Further details not available.
Self-correction setup. As the value function we use correctness, which is 1 when the program y executes and outputs the ground-truth answer and 0 otherwise. Our main experiments do not use explicit feedback, i.e. f(y) = ∅. At inference time, we study two settings for the corrector: (1) applying k corrections and selecting the final generation, (2) an oracle setting that only corrects a draft if the draft is incorrect. We use greedy decoding for the generator and corrector, and k = 1.
Datasets. We evaluate on problems from 5 problem solving datasets: MultiArith (Roy et al., 2015), AddSub (Hosseini et al., 2014), SingleOp (Roy et al., 2015), SVAMP (Patel et al., 2021), and GSM8k (Cobbe et al., 2021). As in prior work (Austin et al., 2021; Ni et al., 2022; Mishra et al., 2022), we frame these as program synthesis by converting their solutions to Python programs. We separate our experiments into three increasingly difficult settings:
1. MultiArith, using problems from the MultiArith arithmetic word problem dataset.
2. Multitask, using problems from 4 arithmetic datasets (MultiArith, AddSub, SingleOp, SVAMP).
3. GSM, using problems from the challenging GSM8k dataset.
For the MultiArith and Multitask settings, we make train/valid/test splits using 60/20/20% of the respective datasets. Similar to Ni et al. (2022), for the GSM setting we use the official GSM8k test split, and create a validation split using 20% of the training set. Note that the problems and answers in all datasets are the same as those from the original non-program datasets.
Baselines. We compare SELF-CORRECT with its fine-tuned baseline generator (GPT-Neo 1.3B) in all three settings. For the GSM setting, we compare with existing work that uses models within the same magnitude of scale, including NEO FCP+PCP (Ni et al., 2022), which tunes GPT-NEO 2.7B with additional self-sampled programs, and their fine-tuned GPT-NEO 2.7B baseline. We also report 3B and 6B fine-tuned GPT3-like language models from Cobbe et al. (2021), which were trained on the non-program version of GSM8k. We evaluate larger models later in (§3.4).
Results. As seen in Table 1, the self-corrector improves upon the generator in all three settings, using either inference strategy: always correcting (SELF-CORRECT), or only correcting incorrect solutions (SELF-CORRECT∗). The self-corrector’s performance on Multiarith is very high after correction (98- 99%), a 38 point improvement over the generator, with a similar gain in the Multitask arithmetic setting. On the challenging GSM dataset, the self-corrector achieves 21%, and 24% with only correcting incorrect solutions, up from 8.57% for the generator. Notably, this is higher than the larger 2.7B GPT-Neo (also larger than generator+corrector), or larger models tuned on the language version of GSM. The results show that self-corrective learning can improve task performance via training a corrector. Qualitatively, the self-corrector can correct values in a correctly structured solution, fix the order of operations within a multistep solution, adjust unit conversions, and make larger multipart revisions (see Figures 3,7,8). Notably, these are learned automatically.
3.2 LEXICALLY CONSTRAINED GENERATION
Next, we consider lexically constrained generation. Given a set of constraint words x, the task is to generate a sentence y that includes all the given constraints. Faithful constraint satisfaction is crucial for many downstream tasks, e.g., those that require converting information to text (McKeown, 1985).
Datasets and Metrics. We experiment on COMMONGEN (Lin et al., 2020) and E2E (Novikova et al., 2017). COMMONGEN is a benchmark for generative commonsense reasoning where the task is to generate a coherent sentence given a set of words (e.g., dog, catch). E2E involves converting structured inputs into natural language. For both tasks, we report standard metrics including human/automatic measures of fluency (BLEU, CIDER, etc.) as well as constraint coverage. We collect human measures of fluency on Amazon Mechanical Turk; see the Appendix for details.
Setup. We parameterize the base generator with GPT-2 Radford et al. (2019) (large-size for COMMONGEN and medium-size for E2E). We fine-tuned the generator for each task. As the value function for self-corrective learning we use coverage, i.e. the percentage of constraints that are present in the output. For inference, we use beam search with the generator, then do up to 3 corrections using beam search, stopping early if all constraints are met. See the Appendix for additional details.
Results. Table 2 shows the evaluation results. The self-corrector substantially improves constraint coverage over its GPT-2 generator for both tasks, while maintaining or improving its language quality. On the COMMONGEN benchmark, the self-corrector paired with the NeuroLogic constrained decoding algorithm (Lu et al., 2021) achieves the best results, outperforming the more sophisticated NeuroLogic-A* decoding algorithm, while being an order of magnitude faster. Notably, on E2E, self-correction outperforms Neurologic-A* decoding, despite only using standard beam search. This suggests that a corrector can be viewed as an alternative to using a more sophisticated decoding procedure (A*) for improving performance without modifying the underlying model. See Figure 9.
3.3 TOXICITY REDUCTION
Next, we consider the task of toxicity reduction (Gehman et al., 2020; Liu et al., 2021). Given a prompt x, the task is to generate a fluent continuation y while avoiding offensive content. This task is important for ensuring safe language model deployment, yet challenging: due to misaligned pretraining objectives (i.e. modeling internet text vs. non-toxic text), language models are susceptible to generating toxic completions, even when prompted with seemingly innocuous text (Gehman et al., 2020). Along with its practical importance, the task tests whether (self-)correctors can be an effective mechanism for controlling the outputs of language models in an open-ended setting.
Table 3: Toxicity reduction with GPT-2 as the base generator (SELF-CORRECT row: 0.171, 0.026, 11.81, 0.80, 0.83).
Datasets and Metrics. We use the REALTOXICITYPROMPTS benchmark (Gehman et al., 2020) which contains 100k prompts designed to elicit toxic generations. Following the experimental setup of Liu et al. (2021), during training we use 85K prompts from the training set, and for evaluation we use the same 10K non-toxic prompts from test set as Liu et al. (2021). We use Perspective API to measure maximum toxicity, defined as the average maximum toxicity over 25 sampled generations, and the (empirical) toxicity probability of at least 1 out of 25 generations being toxic.
Baselines. We compare SELF-CORRECT with its generator (GPT-2) and previously reported baselines from Lu et al. (2022a), including PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2021), DExpert (Liu et al., 2020), DAPT (Gururangan et al., 2020), PPO (Lu et al., 2022a), and Quark (Lu et al., 2022a). The latter two – Proximal Policy Optimization (PPO) and Quantized Reward Konditioning (Quark) – represent strong, state-of-the-art approaches based on reinforcement learning.
Setup. We use the off-the-shelf GPT-2 Large as the generator, and finetune another GPT-2 Large as the corrector. During inference, we use nucleus sampling with p = 0.9 to generate 25 samples for all baselines. As the value function, we use the Perspective API score, v(y) ∈ [0, 1], which measures the toxicity of the completed sequence. We do up to three corrections with the corrector model.
Results. Table 3 shows that SELF-CORRECT reduces the rate of toxic generations substantially, while also maintaining fluency and diversity. SELF-CORRECT outperforms all baselines. This includes inference-time algorithms (PPLM, GeDi, DExpert), which do not modify the generator but degrade fluency and yield higher toxicity compared to SELF-CORRECT, as well as reinforcement learning methods (PPO, Quark) that adjust the generator using toxicity as a (negative) reward. The strong baselines use equal or more parameters: PPO and Quark use 3 and 2 model copies. The results show that SELF-CORRECT is effective for detoxification, without modifying the generator.
3.4 CHANGING MODULES – CORRECTING GPT-3
Next, we show that a self-corrector can improve the outputs of a generator that is much larger than the corrector. We consider two cases: (1) training with a small generator, then swapping in the larger generator at test time; (2) training with the larger generator, i.e. using the large generator to initialize the datapool for self-corrective learning, then using the large generator at test time.
Toxicity. We evaluate case (1) for reducing the toxicity of a large generator (GPT-2 XL, GPT-3). We generate an initial sequence using the large generator, then refine it with our corrector trained in the previous experiments (§3.3). Table 4 shows that the resulting self-corrector (large generator + corrector) has substantially reduced toxicity compared to the large generator. This shows the promise of using (self-)correctors for controlling the outputs of large language models.
Math program synthesis. Table 4 shows results for math. Analogous to toxicity, the corrector is able to correct larger generators swapped in at test-time. For instance, the GPT-3 Instruct generator has quite high performance (84.90 Multitask, 36.80 GSM), which improves to 90.90 and 45.00,
respectively, by adding in a corrector. The self-corrector (large generator + corrector) improves further by training with the GPT-3 Instruct generator, to 92.75 and 45.92, respectively.
3.5 LEVERAGING EXPLICIT FEEDBACK
Next, we demonstrate SELF-CORRECT’s capacity to incorporate explicit natural language feedback. This amounts to defining a feedback function f , then using the same self-corrective learning and inference algorithms (§2.1) as in our preceding experiments (in those experiments, f returned ∅). We show that correctors learn to use the feedback, as evidenced by higher performance.
Toxicity. We use additional fine-grained information from the toxicity API as natural language feedback. Specifically, besides the overall toxicity score, Perspective API also provides scores for fine-grained attributes of toxicity (e.g. identity attack, profanity, flirtation, etc.). At training time, we compare the attribute scores from a hypothesis and its selected correction, and use the attribute with the largest decrease as natural language feedback (e.g. "decrease toxicity in profanity"). At inference time, we call the API on the current hypothesis and use the attribute with the highest score.
Lexical constraints. At training time, we generate natural language feedback for every example pair (x, y, y′) by elaborating the extra lexical constraints satisfied by y′ but not y, e.g. “adding constraint word: read”. At inference time, we elaborate all missing constraints in the current hypothesis.
Math program synthesis. Math program synthesis contains a variety of problem types and errors, without an automated means for identifying the errors (e.g. an API). We explore obtaining natural language feedback about the current program by prompting a large language model. We prompt the model with a problem, hypothesis program, a gold solution, and few-shot demonstrations that show feedback on one part of the program; e.g. In the initial guess, 3 should be subtracted. When the program is correct, the feedback is Correct. At inference time, we also use feedback from the language model. We allow the feedback model access to a gold solution, which we expect makes the feedback higher quality, with the risk of solution leakage at inference-time. Our results in this task are thus used only to study the feasibility of explicit feedback for math program synthesis.
Setup. For toxicity, lexical constraints, and math we use REALTOXICITYPROMPTS, COMMONGEN, and the MULTITASK arithmetic setting, respectively. We follow the setup of each task’s previous experiments (§3.3,§3.2,§3.1), except for math we use 5 correction iterations (previously 1). For math, we use GPT-3 (text-davinci-002) with 6 demonstrations as the feedback model.
Results. Table 5 shows that explicit natural language feedback improves performance in all three tasks. For toxicity, this means that providing fine-grained attributes (e.g. identity attack, profanity, etc.) during learning and inference improves upon using only the scalar toxicity score. Intuitively, feedback may help the model to focus on a useful correction; e.g., see Figure 6.
Figure 5: Math: multiple corrections.
Ablation                   Math     COMMONGEN
SELF-CORRECT               78.24    94.55
✗ proportional sampling    77.25    93.49
✗ value pairing            62.35    91.76
Table 6: Effect of pairing and proportional sampling.
Exploration    Multiarith    Multitask    GSM8k
✗              89.20         73.49        17.60
✓              99.17         78.24        23.96
Table 7: Effect of exploration on program synthesis.
3.6 ADDITIONAL ABLATIONS AND ANALYSIS
Effect of multiple corrections. Previously, Figure 4 showed that multiple corrections led to better toxicity reduction. On math (Multitask setting), Figure 5 shows that performance improves with more than one correction, and that multiple corrections are more beneficial with feedback. Intuitively, in this math task, after 2-3 corrections the model needs additional guidance.
Effect of pairing and proportional sampling. Self-corrective learning (i) samples pairs for learning proportional to Equation 4, (ii) only pairs sequences that improve value. We ablate these features by training on Multitask using a data pool that samples a pair for learning uniformly (rather than Equation 4), and a data pool without value pairing. Table 6 shows that both improve performance.
Effect of exploration. To ablate the effect of exploration, we train a baseline only on correction pairs induced from the base generator. Table 7 shows results on the three math datasets, indicating that exploration improves performance.
4 RELATED WORK
Self-Correction relates to work modeling text edits including supervised Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), unsupervised perturbations (Miao et al., 2019; Liu et al., 2020), training on human-written critiques (Saunders et al., 2022), or refining continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast, Self-Correction learns a text corrector online to improve a quality measure without supervised edits or critiques. Recently, Scheurer et al. (2022) use natural language feedback to improve generations. Denoising sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Reinforcement learning (RL) is often used to improve scalar measures in a generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), yet is infeasible for many models (e.g. those accessed by API), and uses only scalar feedback. Moreover, RL-tuned generators can be used within Self-Correction. Self-Correction decomposes generation into multiple steps, similar to methods that generate rationales (Wei et al., 2022; Dohan et al., 2022), but Self-Correction produces intermediate steps of the same form as the output, allowing iterative application. Self-Correction relates to work on program synthesis (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022) and repair (Gupta et al., 2020; Yasunaga & Liang, 2020). Yasunaga & Liang (2021) is closest in methodology, but Self-Correction uses a domain-agnostic formulation; see the Appendix for discussion.
5 CONCLUSION
We introduced self-correctors, a class of models that decompose generation into initial generation and correction steps. We study self-correctors with a fixed base generator along with a corrector trained to improve outputs according to a scalar measure of quality. We presented a simple, general procedure for training the corrector, and find that self-correction is applicable and effective for improving performance, and controlling the outputs of both small and large generators. Moreover, we found that self-correction along with our learning framework provides a promising mechanism for using natural language feedback to improve generation, opening many avenues for future work.
A RELATED WORK
Self-correction provides a flexible framework for improving the performance of off-the-shelf and fine-tuned language models on a wide range of tasks by decomposing generation into a base generator and a corrector. Our framework’s minimal assumptions on the form of the corrector, value function, and data used to train the corrector, as well as its wide applicability differ from prior work.
Learning to fix code. Our work relates to two streams of research in the code domain. One stream deals with program synthesis, in which a corrector model corrects code from a base synthesizer until it meets a given specification (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022), while another stream deals with program repair: correcting code that is provided as input (Gupta et al., 2020; Yasunaga & Liang, 2020; 2021). Recently, Le et al. (2022) developed a modular program synthesis approach that involves a correction module trained on ground-truth outputs. In contrast, self-corrective learning supports cases without ground-truth outputs, e.g. toxicity.
Closest to our methodology is Yasunaga & Liang (2021). Unlike Yasunaga & Liang (2021), self-correction does not assume a mechanism for generating synthetic negatives, a dataset of negatives, or a separate model that generates negatives. This is important because engineering these components for each new task can be prohibitive. Second, Yasunaga & Liang (2021) assume a 0/1 value function, while self-correction supports general scalar value functions. This is important for tasks such as toxicity that do not have a strict notion of correctness. Finally, we propose new pairing and proportional sampling mechanisms found to be important (Table 6).
Iterative text edits. Self-correction relates to recent works on editing text, including modeling Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), which relies on supervised edits, unsupervised methods (Miao et al., 2019; Liu et al., 2020) that perturb sequences with simple operations (e.g. insertion, deletion), editing with models trained on human-written critiques (Saunders et al., 2022), or iteratively updating continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast to these, self-correction learns an expressive text-to-text corrector that is trained online to improve a quality measure, without requiring a supervised dataset of edits or critiques. Recently, Scheurer et al. (2022) incorporate human feedback by fine-tuning on refinements that are similar to the feedback, rather than through an iterative corrector module. Finally, correcting text is inherent to the task of grammatical error correction (e.g. Lichtarge et al. (2019); Yasunaga et al. (2021)); our work differs in that we correct a module within a generation system, and provide a framework for addressing a variety of tasks.
Denoising and reinforcement learning. Separately, denoising ground-truth sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Scalar measures are often improved with reinforcement learning (RL) on a base generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), which is infeasible for improving many language models (e.g. those accessed through an API), and uses only scalar feedback. Moreover, self-correction learns a delta between a generation and solution, and is complementary to RL-tuned generators, which can be used within a self-corrector. Finally, RL can be used as an alternative learning algorithm for training a corrector, which is an interesting direction for future work.
Modular generation. Self-correction decomposes generation into multiple steps, and is thus part of the general class of methods that decompose generation into a ‘cascade’ of modules (Dohan et al., 2022). Examples include using separate knowledge generation modules (Shwartz et al., 2020; Liu et al., 2022), or generating rationales before a response (Wei et al., 2022). Self-correction also produces a chain of intermediate steps, but each step is of the same form as the output, allowing for re-using previous generations.
B ADDITIONAL EXPERIMENTAL DETAILS
B.1 CROSS-EXPERIMENT DETAILS
In all of our experiments we use an off-the-shelf embedding similarity function from SentenceTransformers (Reimers & Gurevych, 2019): sentence-transformers/all-MiniLM-L6-v2.
B.2 MATHEMATICAL PROGRAM SYNTHESIS
We fine-tune a separate instance of GPT-Neo 1.3B as an initial generator, using the Huggingface library with default hyperparameters, except for evaluation steps, which we set to a small number to ensure a strong checkpoint is selected for each dataset. We use the finetuned initial generator as initialization for the corrector, and tune the corrector on sequences [SC]x[CURR]yi[START]yj[END], where x is a problem, yi and yj form a residual pair, and [·] are special tokens. The loss is on tokens after [START].
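For concreteness, a minimal sketch of how one such training example could be assembled is shown below. This is an illustration rather than the released training code; the tokenizer call follows the Huggingface convention, and the -100 label value marks positions ignored by the loss.

```python
# Sketch: build one corrector training example of the form [SC]x[CURR]yi[START]yj[END],
# with the loss restricted to tokens after [START].

def build_corrector_example(tokenizer, problem, hypothesis, correction):
    prefix = f"[SC]{problem}[CURR]{hypothesis}[START]"
    target = f"{correction}[END]"

    prefix_ids = tokenizer(prefix, add_special_tokens=False)["input_ids"]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]

    input_ids = prefix_ids + target_ids
    labels = [-100] * len(prefix_ids) + target_ids  # -100 is ignored by the loss
    return {"input_ids": input_ids, "labels": labels}
```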
Feedback. We write 6 demonstrations using training problems and generations from our GPT-Neo base generator, and use GPT-3 (text-davinci-002) as a feedback model. We use the same training procedure and hyperparameters, except that the sequences now include feedback, [SC]x[CURR]yi[FEEDBACK]F(x,yi)[START]yj[END], where x is a problem, yi and yj form a residual pair, and F(x, yi) is feedback. We include loss on tokens after [FEEDBACK].
B.3 LEXICALLY-CONSTRAINED GENERATION
Hyper-parameters. Table 8 and Table 9 show hyperparameters for CommonGen and E2E.
Human Evaluation. We evaluate fluency of generations in E2E task using human annotators on Amazon Mechanical Turk (AMT). We randomly sampled 100 instances, along with generations of different baselines and self-corrections. For each instance, we ask 3 annotators to evaluate the fluency of generations on a 3-point Likert scale. We aggregate annotations from 3 annotators using majority vote. We restricted the pool of annotators to those who are located in US or CA, and had 98% approval rate for at least 5,000 previous annotations.
Hyperparameter     Assignment
Predictor          GPT-2 Large
steps              6000
batch size         128
optimizer          Adam
learning rate      1e-5
decoding alg.      beam search (k=5)

Table 8: Hyperparameters for COMMONGEN.
Hyperparameter     Assignment
Predictor          GPT-2 Medium
steps              10000
batch size         100
optimizer          Adam
learning rate      1e-5
decoding alg.      beam search (k=5)

Table 9: Hyperparameters for E2E.
C ADDITIONAL RESULTS
D QUALITATIVE EXAMPLES

1. What is the focus and contribution of the paper regarding generating sequences with language models?
2. What are the strengths of the proposed method, particularly in terms of its elegance and training data generation?
3. What are the weaknesses of the paper, especially regarding the baselines and their understanding?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any related works or papers that the authors should consider or acknowledge?

Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility

Summary Of The Paper
This paper suggests a new method to generate sequences with language models. Instead of sampling directly from the model, the process first generates a candidate and then revises the candidate using a "self-correction" model (possibly in multiple rounds). The core of the paper is an elegant method to generate training data for self-correcting models. Starting with a generative language model, the method generates candidate answers. The idea is to select the candidate answer that is "wrong" (i.e. of low quality) but most similar to the correct answer (=ground truth, which is assumed to be available for the training set). This forms a new training example that consists of the original input, the flawed candidate and the improved candidate (=ground truth).
The similarity between the selected candidate answer and the correct answer is important to make the correction related to the originally generated sequence, which the paper confirms in an ablation.
Strengths And Weaknesses
Pros: Great idea, generally great set of experiments.
Cons: I was not able to understand the baselines. Since this paper assumes a dataset with ground-truth answers, an obvious baseline for this approach would be a model that is fine-tuned on the ground truth. I was not able to understand which of the baselines were fine-tuned and which of them relied on few-shot prompting. I kindly ask the authors to clarify this point (or provide the additional baseline). I will raise my score significantly if this can be clarified.
Clarity, Quality, Novelty And Reproducibility
I consider the idea of how to generate data for training self-correcting models the main novelty. This is a significant and clearly expressed idea that might help a wide range of applications.
The paper is written very well and the authors promise to release the source code upon acceptance of the paper.
The authors might find this paper related: https://arxiv.org/pdf/2003.10555.pdf.
ICLR | Title
Generating Sequences by Learning to Self-Correct
Abstract
Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints, and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present SELF-CORRECTION, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that SELF-CORRECTION improves upon the base generator in three diverse generation tasks (mathematical program synthesis, lexically-constrained generation, and toxicity control), even when the corrector is much smaller than the base generator.
1 INTRODUCTION
The standard practice for natural language generation tasks is inherently single-pass: applying a decoding procedure to either a few-shot prompted language model or one tuned for a given task, then considering the generation as “finished” (e.g. Radford et al. (2019); Brown et al. (2020); Chen et al. (2021)). Powerful generation models often meet most of the task requirements, yet miss a few (e.g., omitting a subset of keywords), or generate incorrect hypotheses that nevertheless provide useful structure (e.g., a correct problem solving strategy with a missing step). However, after generating even a slightly sub-optimal sequence, the single-pass paradigm requires models to “start from scratch”, effectively discarding work already done. A more natural, intuitive approach is leveraging the generation as a useful starting point to refine into a higher quality output.
To formalize this intuition, we introduce Self-Correction for Sequence Generation. Figure 1 demonstrates its central principle: a generation model is re-framed as a base generator, which produces a reasonable initial hypothesis but does not need to solve the task in one pass, and a second module–the corrector–trained to make up the difference between the hypothesis and an optimal solution. Neither the generator nor the corrector must solve the full task in one pass, and the corrector can be applied multiple times to iteratively improve the output (§3.6). We propose a simple, general procedure for training the corrector (Figure 2) by pairing generator outputs with carefully selected targets. The result is a system which self-corrects, producing outputs through multiple generation passes and breaking the task into steps that can be solved by dedicated and efficient sub-systems.
Self-Correction builds on past work for correction in the code and text (e.g. Yasunaga et al. (2021); Faltings et al. (2021)) domains, but provides a unified formalism with minimal assumptions about data and feedback, which applies generally to diverse tasks. A corrector model improves the base generator on 3 such tasks in our experiments: mathematical program synthesis (§3.1), lexically constrained generation (§3.2), and toxicity reduction (§3.3). The trained corrector model even transfers to a larger generator with similar performance to training from scratch (§3.4). Finally, we explore introducing a third module to the Self-Correction system (§3.5), explicitly using natural language feedback to guide corrections, with promising results. Self-Correction is an exciting path to build on the generations of strong models, with efficient, effective, and transferable corrector networks.

∗First authors, contributed equally. †Second authors, contributed equally.
2 SELF-CORRECTING SEQUENCE GENERATORS
A typical autoregressive text generator (e.g. GPT-3 (Brown et al., 2020)) maps an input prompt to a distribution over outputs using a single parameterized module (e.g. a large transformer), p0(y|x). We explore an alternative that decomposes into two modules, a base generator, and a corrector,
p(y|x) = \sum_{y_0} \underbrace{p_0(y_0|x)}_{\text{generator}} \; \underbrace{p_\theta(y|y_0, x)}_{\text{corrector}} \qquad (1)

where the generator provides an initial hypothesis that is refined by the corrector. In practice, the corrector can be applied multiple times, p(y_T|x) = \sum_{y_0} \sum_{y_1} \cdots \sum_{y_{T-1}} p_0(y_0|x) \prod_t p_\theta(y_{t+1}|y_t, x). Since a model of this form can both generate and correct its generations, we call it a Self-Corrector.
Self-correctors have several unique properties compared to typical generators. First, a self-corrector decouples generation and correction, allowing us to freely parameterize each module: for instance, by prompting a single language model or using two different language models. In this paper, we develop a framework to train a separate corrector model (§2.1). We find that the resulting self-corrector improves upon the generator alone (§3), even when the corrector is much smaller (§3.4).
Second, since the generator and the corrector are separated, we can keep the generator as a generalpurpose language model and train the corrector with different objectives for different task requirements. In §2.1, we propose a training algorithm for the corrector that is dedicated to improving generations, where the improvement can be in any aspect, measured by scalar values.
Third, the corrector can receive explicit feedback about intermediate generations to guide subsequent generations. Formally, p(y|x) = \sum_{y_0} p_0(y_0|x)\, p_\theta(y|y_0, x, f(y_0)), where f is the feedback. The feedback can be of many forms, e.g. a sentence, a compiler trace, etc. In contrast, a typical generator that generates in a single pass does not leverage feedback on its own generation. In this paper, we show that the corrector can learn to exploit explicit natural language feedback to achieve better performance (§3.5). Next, we describe our training framework for the corrector.
2.1 LEARNING A CORRECTOR
Our goal is to have the generator generate an initial hypothesis, then improve the hypothesis with the corrector (Eq. 1). We train the corrector to improve the quality of a hypothesis, while staying as close as possible to the original hypothesis. Here, quality is measured with a scalar value function v(y) which is accessible at training time (e.g. 0/1 indicator of program correctness, a toxicity score).
Algorithm 1 Self-corrective learning
Input: generator p0, corrector pθ, prompts X, value v(·), feedback f(·)

    Initialize datapool D by sampling from p0                      ▷ Initialization: Eq. 2
    for iteration ∈ {1, 2, . . .} do
        Form value-improving pairs P from D                        ▷ Pairing: Eq. 3
        for step in 1, 2, . . . , M do
            Sample a batch of value-improving pairs from P using Eq. 4
            Compute the loss and update θ using gradient descent   ▷ Learning
        for x ∈ X do
            Sample hypotheses y from datapool D
            Generate corrections y′ ∼ pθ(·|y, x, f(y))
            Add all (x, y′, v(y′), f(y′)) to the datapool D        ▷ Exploration: Eq. 5
Since direct supervision on how to improve hypotheses is not available, we design a new algorithm to train the corrector, which we refer to as self-corrective learning. The algorithm collects a pool of generations, pairs them and selects pairs of generation that increase in value and are nearby, then updates the corrector on these pairs. As training progresses, more generations are added to the pool using the current corrector. Algorithm 1 summarizes self-corrective learning, detailed below.
Initialization. Self-corrective learning begins with a generator p0(y0|x), a corrector pθ(y′|y, x) , a set of training prompts X , and a value function v : Y → R. Optionally, we can use additional feedback f : Y → F and learn pθ(y′|y, x, f(y)), where F is arbitrary. The algorithm initializes a datapool of (input, output, value, feedback) examples by using the generator to generate multiple outputs for each input. Formally,
D_x = \{(x, y, v(y), f(y)) \mid \text{for all } y \in y_{1:N} \sim q(p_0(\cdot|x))\}, \qquad D = \bigcup_{x \in X} D_x, \qquad (2)
where y1:N denotes N outputs generated with decoding algorithm q (e.g. temperature sampling). When available, (x, y, v(y), f(y)) examples from another source (e.g. a dataset) can also be added.
Pairing. Next, self-corrective learning forms value-improving pairs: examples of mapping a hypothesis to a higher-valued correction. We use the datapool D to form a set of (input, hypothesis, correction) pairs. A pair is formed when an output has a higher value than another ∗:
P_x = \{(x, y, y') \mid v(y) < v(y') \text{ for all } y, y' \in D_x \times D_x\}, \qquad P = \bigcup_{x \in X} P_x, \qquad (3)
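A minimal sketch of this pairing step (illustrative variable names, not the released implementation): given the per-input datapool entries of Eq. 2, we keep only ordered pairs whose value strictly increases.

```python
# Sketch: form value-improving (hypothesis, correction) pairs for one input x (Eq. 3).
def form_pairs(datapool_x):
    """datapool_x: list of (y, value) entries for a single input x."""
    pairs = []
    for y, v_y in datapool_x:
        for y_prime, v_y_prime in datapool_x:
            if v_y < v_y_prime:  # the correction must strictly improve the value
                pairs.append((y, y_prime))
    return pairs
```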
Learning. Next, self-corrective learning selects (input, hypothesis, correction) pairs to update the corrector with. We sample an input, x ∼ U(X), then sample a (x, y, y′) pair proportional to its improvement in value as well as the proximity between the hypothesis y and the correction y′ (we also store the value and feedback for y and y′ along with (x, y, y′), which we omit to reduce clutter):

\mathbb{P}[(x, y, y') \mid x] \propto \exp\Big( \underbrace{\alpha \cdot (v(y') - v(y))}_{\text{improvement}} + \underbrace{\beta \cdot s(y, y')}_{\text{proximity}} \Big) / Z(y), \qquad (4)

where s(y, y′) is a similarity function and Z(y) normalizes over the available corrections for y in Px. Increasing the hyperparameter α ∈ R≥0 puts more weight on targets that add more value, while increasing β ∈ R≥0 retains more similar targets. We update the corrector using the cross-entropy loss L(θ) = − log pθ(y′|y, x, f(y)) on batches sampled in this way.

Exploration. During exploration, self-corrective learning adds new generations to the datapool by generating from the current corrector:
D'_x = \{(x, y', v(y'), f(y')) \mid \text{for all } y' \in y'_{1:N} \sim q(p_\theta(\cdot|y, x, f(y)))\}, \qquad D' = \bigcup_{x \in X} D'_x \qquad (5)
and updating the datapool D ← D∪D′. The hypotheses y to correct can come from any source, e.g. newly sampled from the base generator, or from the datapool; we use the latter in our experiments.
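For illustration, the proportional sampling of Eq. 4 used in the learning step can be implemented as a softmax over the available corrections of a hypothesis; the sketch below uses generic value and similarity callables and is not the released implementation.

```python
import math
import random

# Sketch: sample a correction y' for hypothesis y proportional to Eq. 4.
def sample_correction(y, corrections, value, similarity, alpha=1.0, beta=1.0):
    """corrections: the y' paired with y in P_x; value and similarity return scalars."""
    weights = [
        math.exp(alpha * (value(yp) - value(y)) + beta * similarity(y, yp))
        for yp in corrections
    ]
    z = sum(weights)  # plays the role of Z(y)
    return random.choices(corrections, weights=[w / z for w in weights], k=1)[0]
```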
Inference. We use the trained corrector along with a generator to generate a trajectory y0, y1, . . . , yT , and consider yT the final output. Since marginalizing over the intermediate generations in Eq. 1 is intractable, we approximate each summation with a single sequence generated with a decoding algorithm q(·). That is, we decode from the generator, then repeatedly from the corrector:
• Generation: y0 ∼ q(p0(y0|x));
• Correction: yt+1 ∼ q(pθ(yt+1|yt, x, f(yt))), for t = 0, 1, . . . , T − 1.

The stopping time T is either fixed, or reached when a target value is obtained (if v(y) is available).
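A sketch of this inference loop (the generate/correct callables stand in for the decoding calls above; the stopping rule shown is the value-based variant):

```python
# Sketch: generate with the base generator, then iteratively apply the corrector.
def self_correct(x, generate, correct, value=None, target=1.0, max_steps=3):
    y = generate(x)                       # Generation: y0 ~ q(p0(. | x))
    for _ in range(max_steps):
        if value is not None and value(y) >= target:
            break                         # stop once a target value is reached
        y = correct(y, x)                 # Correction: y_{t+1} ~ q(p_theta(. | y_t, x))
    return y
```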
3 EXPERIMENTS
We evaluate SELF-CORRECTION on a diversity of tasks: mathematical program synthesis, in which generations are strictly correct or incorrect, and generators typically have low performance; lexically-constrained generation, which allows for partial credit, and generators usually give partially-correct solutions (e.g. matching 3 out of 5 constraints); and toxicity control, where ‘correctness’ is more loosely defined, and the output space is much more open-ended. Our experiments are organized to study three settings:
1. Using self-correctors to improve upon generators (§3.1, 3.2, 3.3).
2. Correcting generators that are much larger than the corrector (§3.4).
3. Leveraging explicit feedback during training and inference (§3.5).
Next, we describe the self-correction setup and baselines for each task, along with their results. ∗
3.1 MATHEMATICAL PROGRAM SYNTHESIS
First, we consider mathematical program synthesis (Austin et al., 2021; Mishra et al., 2022). Given a natural language problem specification x, the task is to generate a program y that upon execution returns the correct answer to x. The task is challenging as it draws on language understanding, multiple-step mathematical problem solving (e.g. identifying a solution strategy, decomposing a problem), and leveraging symbolic tools (e.g. built-in operations, variables). Furthermore, the task demands a high level of precision, e.g. a single misplaced operation makes the program incorrect.
Experimental setup. As the corrector we use GPT-Neo 1.3B (Black et al., 2021), an open-source autoregressive language model. GPT-Neo is pre-trained on language and code (Gao et al., 2021), and hence is widely used for code-related generation (e.g. Chen et al. (2021); Ni et al. (2022); Mishra et al. (2022)). We consider two settings for the initial generator: (1) a separate fine-tuned instance of GPT-Neo 1.3B, and (2) few-shot prompted GPT-3 (Brown et al., 2020). For GPT-3, we evaluate the davinci and text-davinci-002 engines, representative of large (≈ 175B∗) generators that are state-of-the-art in related tasks (Wei et al., 2022). See the Appendix for additional details.
∗Code will be available at www.github.com/wellecks/self_correction.
∗Estimated size of davinci (https://blog.eleuther.ai/gpt3-model-sizes). Further details not available.
Self-correction setup. As the value function we use correctness, which is 1 when the program y executes and outputs the ground-truth answer and 0 otherwise. Our main experiments do not use explicit feedback, i.e. f(y) = ∅. At inference time, we study two settings for the corrector: (1) applying k corrections and selecting the final generation, (2) an oracle setting that only corrects a draft if the draft is incorrect. We use greedy decoding for the generator and corrector, and k = 1.
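For illustration, the correctness value function can be implemented by executing the generated program and comparing its result to the ground-truth answer. The sketch below assumes, purely for illustration, that programs store their result in a variable named answer; it also runs untrusted code with exec, which should be sandboxed in practice.

```python
# Sketch: 0/1 correctness value for a generated math program.
def correctness(program: str, ground_truth) -> int:
    scope = {}
    try:
        exec(program, scope)  # caution: sandbox untrusted code in a real system
    except Exception:
        return 0
    return int(scope.get("answer") == ground_truth)
```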
Datasets. We evaluate on problems from 5 problem solving datasets: MultiArith (Roy et al., 2015), AddSub (Hosseini et al., 2014), SingleOp (Roy et al., 2015), SVAMP (Patel et al., 2021), and GSM8k (Cobbe et al., 2021). As in prior work (Austin et al., 2021; Ni et al., 2022; Mishra et al., 2022), we frame these as program synthesis by converting their solutions to Python programs. We separate our experiments into three increasingly difficult settings:
1. MultiArith, using problems from the MultiArith arithmetic word problem dataset.
2. Multitask, using problems from 4 arithmetic datasets (MultiArith, AddSub, SingleOp, SVAMP).
3. GSM, using problems from the challenging GSM8k dataset.
For the MultiArith and Multitask settings, we make train/valid/test splits using 60/20/20% of the respective datasets. Similar to Ni et al. (2022), for the GSM setting we use the official GSM8k test split, and create a validation split using 20% of the training set. Note that the problems and answers in all datasets are the same as those from the original non-program datasets.
Baselines. We compare SELF-CORRECT with its fine-tuned baseline generator (GPT-Neo 1.3B) in all three settings. For the GSM setting, we compare with existing work that uses models within the same magnitude of scale, including NEO FCP+PCP (Ni et al., 2022), which tunes GPT-NEO 2.7B with additional self-sampled programs, and their fine-tuned GPT-NEO 2.7B baseline. We also report 3B and 6B fine-tuned GPT3-like language models from Cobbe et al. (2021), which were trained on the non-program version of GSM8k. We evaluate larger models later in (§3.4).
Results. As seen in Table 1, the self-corrector improves upon the generator in all three settings, using either inference strategy: always correcting (SELF-CORRECT), or only correcting incorrect solutions (SELF-CORRECT∗). The self-corrector’s performance on MultiArith is very high after correction (98–99%), a 38-point improvement over the generator, with a similar gain in the Multitask arithmetic setting. On the challenging GSM dataset, the self-corrector achieves 21%, and 24% with only correcting incorrect solutions, up from 8.57% for the generator. Notably, this is higher than the larger 2.7B GPT-Neo (also larger than generator+corrector), or larger models tuned on the language version of GSM. The results show that self-corrective learning can improve task performance via training a corrector. Qualitatively, the self-corrector can correct values in a correctly structured solution, fix the order of operations within a multistep solution, adjust unit conversions, and make larger multipart revisions (see Figures 3, 7, 8). Notably, these are learned automatically.
3.2 LEXICALLY CONSTRAINED GENERATION
Next, we consider lexically constrained generation. Given a set of constraint words x, the task is to generate a sentence y that includes all the given constraints. Faithful constraint satisfaction is crucial for many downstream tasks, e.g., those that require converting information to text (McKeown, 1985).
Datasets and Metrics. We experiment on COMMONGEN (Lin et al., 2020) and E2E (Novikova et al., 2017). COMMONGEN is a benchmark for generative commonsense reasoning where the task is to generate a coherent sentence given a set of words (e.g., dog, catch). E2E involves converting structured inputs into natural language. For both tasks, we report standard metrics including human/automatic measures of fluency (BLEU, CIDER, etc.) as well as constraint coverage. We collect human measures of fluency on Amazon Mechanical Turk; see the Appendix for details.
Setup. We parameterize the base generator with GPT-2 (Radford et al., 2019) (large-size for COMMONGEN and medium-size for E2E). We fine-tune the generator for each task. As the value function for self-corrective learning we use coverage, i.e. the percentage of constraints that are present in the output. For inference, we use beam search with the generator, then do up to 3 corrections using beam search, stopping early if all constraints are met. See the Appendix for additional details.
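A minimal sketch of the coverage value (whole-word matching for illustration; the actual implementation may handle inflections or multi-word constraints differently):

```python
# Sketch: fraction of constraint words present in the generated sentence.
def coverage(constraints, sentence):
    words = set(sentence.lower().split())
    return sum(1 for c in constraints if c.lower() in words) / len(constraints)
```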
Results. Table 2 shows the evaluation results. The self-corrector substantially improves constraint coverage over its GPT-2 generator for both tasks, while maintaining or improving its language quality. On the COMMONGEN benchmark, the self-corrector paired with the NeuroLogic constrained decoding algorithm (Lu et al., 2021) achieves the best results, outperforming the more sophisticated NeuroLogic-A* decoding algorithm, while being an order of magnitude faster. Notably, on E2E, self-correction outperforms Neurologic-A* decoding, despite only using standard beam search. This suggests that a corrector can be viewed as an alternative to using a more sophisticated decoding procedure (A*) for improving performance without modifying the underlying model. See Figure 9.
3.3 TOXICITY REDUCTION
Next, we consider the task of toxicity reduction (Gehman et al., 2020; Liu et al., 2021). Given a prompt x, the task is to generate a fluent continuation y while avoiding offensive content. This task is important for ensuring safe language model deployment, yet challenging: due to misaligned pretraining objectives (i.e. modeling internet text vs. non-toxic text), language models are susceptible to generating toxic completions, even when prompted with seemingly innocuous text (Gehman et al., 2020). Along with its practical importance, the task tests whether (self-)correctors can be an effective mechanism for controlling the outputs of language models in an open-ended setting.

Table 3: Toxicity reduction. GPT-2 is the base generator. (SELF-CORRECT row: 0.171, 0.026, 11.81, 0.80, 0.83)
Datasets and Metrics. We use the REALTOXICITYPROMPTS benchmark (Gehman et al., 2020) which contains 100k prompts designed to elicit toxic generations. Following the experimental setup of Liu et al. (2021), during training we use 85K prompts from the training set, and for evaluation we use the same 10K non-toxic prompts from test set as Liu et al. (2021). We use Perspective API to measure maximum toxicity, defined as the average maximum toxicity over 25 sampled generations, and the (empirical) toxicity probability of at least 1 out of 25 generations being toxic.
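Given per-generation toxicity scores, the two reported metrics can be computed as in the sketch below (the score callable is a placeholder for the Perspective API call, and the 0.5 threshold for calling a generation toxic is an assumption for illustration):

```python
# Sketch: average maximum toxicity and toxicity probability over 25 samples per prompt.
def toxicity_metrics(prompt_generations, score, threshold=0.5):
    """prompt_generations: list of 25-sample lists, one list per prompt."""
    max_tox, tox_prob = [], []
    for gens in prompt_generations:
        scores = [score(g) for g in gens]
        max_tox.append(max(scores))                       # maximum toxicity for this prompt
        tox_prob.append(float(max(scores) >= threshold))  # at least 1 of 25 is toxic
    n = len(prompt_generations)
    return sum(max_tox) / n, sum(tox_prob) / n
```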
Baselines. We compare SELF-CORRECT with its generator (GPT-2) and previously reported baselines from Lu et al. (2022a), including PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2021), DExpert (Liu et al., 2020), DAPT (Gururangan et al., 2020), PPO (Lu et al., 2022a), and Quark (Lu et al., 2022a). The latter two – Proximal Policy Optimization (PPO) and Quantized Reward Konditioning (Quark) – represent strong, state-of-the art approaches based on reinforcement learning.
Setup. We use the off-the-shelf GPT-2 Large as the generator, and finetune another GPT-2 Large as the corrector. During inference, we use nucleus sampling with p = 0.9 to generate 25 samples for all baselines. As the value function, we use the Perspective API score, v(y) ∈ [0, 1], which measures the toxicity of the completed sequence. We do up to three corrections with the corrector model.
Results. Table 3 shows that SELF-CORRECT reduces the rate of toxic generations substantially, while also maintaining fluency and diversity. SELF-CORRECT outperforms all baselines. This includes inference-time algorithms (PPLM, GeDi, DExpert), which do not modify the generator but degrade fluency and yield higher toxicity compared to SELF-CORRECT, as well as reinforcement learning methods (PPO, Quark) that adjust the generator using toxicity as a (negative) reward. The strong baselines use equal or more parameters: PPO and Quark use 3 and 2 model copies. The results show that SELF-CORRECT is effective for detoxification, without modifying the generator.
3.4 CHANGING MODULES – CORRECTING GPT-3
Next, we show that a self-corrector can improve the outputs of a generator that is much larger than the corrector. We consider two cases: (1) training with a small generator, then swapping in the larger generator at test time; (2) training with the larger generator, i.e. using the large generator to initialize the datapool for self-corrective learning, then using the large generator at test time.
Toxicity. We evaluate case (1) for reducing the toxicity of a large generator (GPT-2 XL, GPT-3). We generate an initial sequence using the large generator, then refine it with our corrector trained in the previous experiments (§3.3). Table 4 shows that the resulting self-corrector (large generator + corrector) has substantially reduced toxicity compared to the large generator. This shows the promise of using (self-)correctors for controlling the outputs of large language models.
Math program synthesis. Table 4 shows results for math. Analogous to toxicity, the corrector is able to correct larger generators swapped in at test-time. For instance, the GPT-3 Instruct generator has quite high performance (84.90 Multitask, 36.80 GSM), which improves to 90.90 and 45.00,
respectively, by adding in a corrector. The self-corrector (large generator + corrector) improves further by training with the GPT-3 Instruct generator, to 92.75 and 45.92, respectively.
3.5 LEVERAGING EXPLICIT FEEDBACK
Next, we demonstrate SELF-CORRECT’s capacity to incorporate explicit natural language feedback. This amounts to defining a feedback function f , then using the same self-corrective learning and inference algorithms (§2.1) as in our preceding experiments (in those experiments, f returned ∅). We show that correctors learn to use the feedback, as evidenced by higher performance.
Toxicity. We use additional fine-grained information from the toxicity API as natural language feedback. Specifically, besides the overall toxicity score, Perspective API also provides scores for fine-grained attributes of toxicity (e.g. identity attack, profanity, flirtation, etc.). At training time, we compare the attribute scores from a hypothesis and its selected correction, and use the attribute with the largest decrease as natural language feedback (e.g. "decrease toxicity in profanity"). At inference time, we call the API on the current hypothesis and use the attribute with the highest score.
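A sketch of how such attribute-level feedback could be constructed at training time (attribute names and the feedback template are illustrative):

```python
# Sketch: name the attribute whose score decreases the most from hypothesis to correction.
def attribute_feedback(scores_hyp, scores_corr):
    """scores_*: dict mapping attribute name (e.g. 'profanity') to a score in [0, 1]."""
    attr = max(scores_hyp, key=lambda a: scores_hyp[a] - scores_corr.get(a, 0.0))
    return f"decrease toxicity in {attr}"
```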
Lexical constraints. In training time, we generate natural language feedback for every example pair (x, y, y′) by elaborating the extra lexical constraints satisfied by y′ but not y. e.g. “adding constraint word: read”. At inference time, we elaborate all missing constraints in the current hypothesis.
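Similarly, inference-time feedback for lexical constraints can simply enumerate the constraint words missing from the current hypothesis (a sketch; the template wording is illustrative):

```python
# Sketch: elaborate the constraints missing from the current hypothesis.
def constraint_feedback(constraints, hypothesis):
    words = set(hypothesis.lower().split())
    missing = [c for c in constraints if c.lower() not in words]
    if not missing:
        return "all constraints satisfied"
    return "adding constraint word: " + ", ".join(missing)
```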
Math program synthesis. Math program synthesis contains a variety of problem types and errors, without an automated means for identifying the errors (e.g. an API). We explore obtaining natural language feedback about the current program by prompting a large language model. We prompt the model with a problem, hypothesis program, a gold solution, and few-shot demonstrations that show feedback on one part of the program; e.g. In the initial guess, 3 should be subtracted. When the program is correct, the feedback is Correct. At inference time, we also use feedback from the language model. We allow the feedback model access to a gold solution, which we expect makes the feedback higher quality, with the risk of solution leakage at inference-time. Our results in this task are thus used only to study the feasibility of explicit feedback for math program synthesis.
Setup. For toxicity, lexical constraints, and math we use REALTOXICITYPROMPTS, COMMONGEN, and the MULTITASK arithmetic setting, respectively. We follow the setup of each task’s previous experiments (§3.3,§3.2,§3.1), except for math we use 5 correction iterations (previously 1). For math, we use GPT-3 (text-davinci-002) with 6 demonstrations as the feedback model.
Results. Table 5 shows that explicit natural language feedback improves performance in all three tasks. For toxicity, this means that providing fine-grained attributes (e.g. identity attack, profanity, etc.) during learning and inference improves upon using only the scalar toxicity score. Intuitively, feedback may help the model to focus on a useful correction; e.g., see Figure 6.

Figure 5: Math: multiple corrections.

Ablation                   Math     COMMONGEN
SELF-CORRECT               78.24    94.55
✗ proportional sampling    77.25    93.49
✗ value pairing            62.35    91.76

Table 6: Effect of pairing and proportional sampling.

Exploration    Multiarith    Multitask    GSM8k
✗              89.20         73.49        17.60
✓              99.17         78.24        23.96

Table 7: Effect of exploration on program synthesis.
3.6 ADDITIONAL ABLATIONS AND ANALYSIS
Effect of multiple corrections. Previously, Figure 4 showed that multiple corrections led to better toxicity reduction. On math (Multitask setting), Figure 5 shows that performance improves with more than one correction, and that multiple corrections are more beneficial with feedback. Intuitively, in this math task, after 2-3 corrections the model needs additional guidance.
Effect of pairing and proportional sampling. Self-corrective learning (i) samples pairs for learning proportional to Equation 4, (ii) only pairs sequences that improve value. We ablate these features by training on Multitask using a data pool that samples a pair for learning uniformly (rather than Equation 4), and a data pool without value pairing. Table 6 shows that both improve performance.
Effect of exploration. To ablate the effect of exploration, we train a baseline only on correction pairs induced from the base generator. Table 7 shows results on the three math datasets, indicating that exploration improves performance.
4 RELATED WORK
Self-Correction relates to work modeling text edits including supervised Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), unsupervised perturbations (Miao et al., 2019; Liu et al., 2020), training on human-written critiques (Saunders et al., 2022), or refining continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast, Self-Correction learns a text corrector online to improve a quality measure without supervised edits or critiques. Recently, Scheurer et al. (2022) use natural language feedback to improve generations. Denoising sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Reinforcement learning (RL) is often used to improve scalar measures in a generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), yet is infeasible for many models (e.g. those accessed by API), and uses only scalar feedback. Moreover, RL-tuned generators can be used within Self-Correction. Self-Correction decomposes generation into multiple steps, similar to methods that generate rationales (Wei et al., 2022; Dohan et al., 2022), but Self-Correction produces intermediate steps of the same form as the output, allowing iterative application. Self-Correction relates to work on program synthesis (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022) and repair (Gupta et al., 2020; Yasunaga & Liang, 2020). Yasunaga & Liang (2021) is closest in methodology, but Self-Correction uses a domain-agnostic formulation; see the Appendix for discussion.
5 CONCLUSION
We introduced self-correctors, a class of models that decompose generation into initial generation and correction steps. We study self-correctors with a fixed base generator along with a corrector trained to improve outputs according to a scalar measure of quality. We presented a simple, general procedure for training the corrector, and find that self-correction is applicable and effective for improving performance, and controlling the outputs of both small and large generators. Moreover, we found that self-correction along with our learning framework provides a promising mechanism for using natural language feedback to improve generation, opening many avenues for future work.
A RELATED WORK
Self-correction provides a flexible framework for improving the performance of off-the-shelf and fine-tuned language models on a wide range of tasks by decomposing generation into a base generator and a corrector. Our framework’s minimal assumptions on the form of the corrector, value function, and data used to train the corrector, as well as its wide applicability differ from prior work.
Learning to fix code. Our work relates to two streams of research in the code domain. One stream deals with program synthesis, in which a corrector model corrects code from a base synthesizer until it meets a given specification (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022), while another stream deals with program repair: correcting code that is provided as input (Gupta et al., 2020; Yasunaga & Liang, 2020; 2021). Recently, Le et al. (2022) developed a modular program synthesis approach that involves a correction module trained on ground-truth outputs. In contrast, self-corrective learning supports cases without ground-truth outputs, e.g. toxicity.
Closest to our methodology is Yasunaga & Liang (2021). Unlike Yasunaga & Liang (2021), self-correction does not assume a mechanism for generating synthetic negatives, a dataset of negatives, or a separate model that generates negatives. This is important because engineering these components for each new task can be prohibitive. Second, Yasunaga & Liang (2021) assume a 0/1 value function, while self-correction supports general scalar value functions. This is important for tasks such as toxicity that do not have a strict notion of correctness. Finally, we propose new pairing and proportional sampling mechanisms that we find to be important (Table 6).
Iterative text edits. Self-correction relates to recent works on editing text, including modeling Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), which relies on supervised edits, unsupervised methods (Miao et al., 2019; Liu et al., 2020) that perturb sequences with simple operations (e.g. insertion, deletion), editing with models trained on human-written critiques (Saunders et al., 2022), or iteratively updating continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast to these, self-correction learns an expressive text-to-text corrector that is trained online to improve a quality measure, without requiring a supervised dataset of edits or critiques. Recently, Scheurer et al. (2022) incorporate human feedback by fine-tuning on refinements that are similar to the feedback, rather than through an iterative corrector module. Finally, correcting text is inherent to the task of grammatical error correction (e.g. Lichtarge et al. (2019); Yasunaga et al. (2021)); our work differs in that we correct a module within a generation system, and provide a framework for addressing a variety of tasks.
Denoising and reinforcement learning. Separately, denoising ground-truth sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Scalar measures are often improved with reinforcement learning (RL) on a base generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), which is infeasible for improving many language models (e.g. those accessed through an API), and uses only scalar feedback. Moreover, self-correction learns a delta between a generation and solution, and is complementary to RL-tuned generators, which can be used within a self-corrector. Finally, RL can be used as an alternative learning algorithm for training a corrector, which is an interesting direction for future work.
Modular generation. Self-correction decomposes generation into multiple steps, and is thus part of the general class of methods that decompose generation into a ‘cascade’ of modules (Dohan et al., 2022). Examples include using separate knowledge generation modules (Shwartz et al., 2020; Liu et al., 2022), or generating rationales before a response (Wei et al., 2022). Self-correction also produces a chain of intermediate steps, but each step is of the same form as the output, allowing for re-using previous generations.
B ADDITIONAL EXPERIMENTAL DETAILS
B.1 CROSS-EXPERIMENT DETAILS
In all of our experiments we use an off-the-shelf embedding similarity function from SentenceTransformers (Reimers & Gurevych, 2019): sentence-transformers/all-MiniLM-L6-v2.
B.2 MATHEMATICAL PROGRAM SYNTHESIS
We fine-tune a separate instance of GPT-Neo 1.3B as an initial generator, using the Huggingface library with default hyperparameters, except for evaluation steps, which we set to a small number to ensure a strong checkpoint is selected for each dataset. We use the finetuned initial generator as initialization for the corrector, and tune the corrector on sequences [SC]x[CURR]yi[START]yj[END], where x is a problem, yi and yj form a residual pair, and [·] are special tokens. The loss is on tokens after [START].
Feedback. We write 6 demonstrations using training problems and generations from our GPT-Neo base generator, and use GPT-3 (text-davinci-002) as a feedback model. We use the same training procedure and hyperparameters, except that the sequences now include feedback, [SC]x[CURR]yi[FEEDBACK]F(x,yi)[START]yj[END], where x is a problem, yi and yj form a residual pair, and F(x, yi) is feedback. We include loss on tokens after [FEEDBACK].
B.3 LEXICALLY-CONSTRAINED GENERATION
Hyper-parameters. Table 8 and Table 9 show hyperparameters for CommonGen and E2E.
Human Evaluation. We evaluate fluency of generations in E2E task using human annotators on Amazon Mechanical Turk (AMT). We randomly sampled 100 instances, along with generations of different baselines and self-corrections. For each instance, we ask 3 annotators to evaluate the fluency of generations on a 3-point Likert scale. We aggregate annotations from 3 annotators using majority vote. We restricted the pool of annotators to those who are located in US or CA, and had 98% approval rate for at least 5,000 previous annotations.
Hyperparameter     Assignment
Predictor          GPT-2 Large
steps              6000
batch size         128
optimizer          Adam
learning rate      1e-5
decoding alg.      beam search (k=5)

Table 8: Hyperparameters for COMMONGEN.
Hyperparameter     Assignment
Predictor          GPT-2 Medium
steps              10000
batch size         100
optimizer          Adam
learning rate      1e-5
decoding alg.      beam search (k=5)

Table 9: Hyperparameters for E2E.
C ADDITIONAL RESULTS
D QUALITATIVE EXAMPLES

1. What is the main contribution of the paper regarding sequence generation?
2. What are the strengths and weaknesses of the proposed self-correction approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the concerns regarding the evaluation setup and comparison with other works?
5. Is there any question about the similarity function definition in Equation 4?

Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility

Summary Of The Paper
This paper presents a self-correction approach for sequence generation. Specifically, given a sequence decoded by the base generator, they train a corrector to generate another sequence, with the goal of achieving a better score than the input sequence. They design the self-corrective learning algorithm to train the corrector, where they select sequence pairs for training such that: (1) the target sequence improves the score; and (2) the target sequence stays relatively similar to the input sequence. They also evaluate a setting where the corrector can leverage additional natural language feedback for correction. They evaluate their approach on 3 tasks: mathematical program synthesis, lexically-constrained generation, and toxicity control. The results demonstrate that adding a corrector improves the results over the base generator.
Strengths And Weaknesses
Strengths:
Iterative decoding is a promising direction to improve the solution quality for a wide range of tasks, including sequence generation.
The evaluation covers different domains and shows good empirical results.
Weaknesses:
Learning input correction is not a novel approach. This work completely ignores a long line of research on learning to repair in the code domain. For example, there are existing works that learn a neural debugger to repair the prediction of a base program synthesizer [e.g., 1, 2, 3, 4], and propose neural networks for stand-alone program repair tasks (among others, [5, 6] are closely related in terms of the approach design). The authors should provide a proper discussion of related works in this space.
The evaluation setup is not convincing. The base generators use a small beam size, and sometimes greedy decoding. A fair comparison is to increase the number of samples from the base generator, and see whether the corrector improves the performance with the same number of samples in total.
Also, it is unclear whether the approach improves over the SOTA results. For example, on the GSM dataset and some other math benchmarks, the SOTA approach is self-consistency [7], where the best result on GSM is 78%. The authors should evaluate their self-correction method upon better base generators and compare the performance.
In Equation 4, it is unclear how the similarity function is defined.
[1] Gupta et al., Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis, NeurIPS 2020.
[2] Balog et al., Neural Program Synthesis with a Differentiable Fixer.
[3] Fu et al., Coda: An End-to-End Neural Program Decompiler, NeurIPS 2019.
[4] Le, Wang et al., CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning, NeurIPS 2022.
[5] Yasunaga and Liang, Graph-based, Self-Supervised Program Repair from Diagnostic Feedback, ICML 2020.
[6] Yasunaga and Liang, Break-It-Fix-It: Unsupervised Learning for Program Repair, ICML 2021.
[7] Wang et al., Self-consistency improves chain of thought reasoning in language models.
Clarity, Quality, Novelty And Reproducibility
The writing is generally clear, except that the similarity function design is unclear.
The presented self-correction approach is sound and empirically effective. However, it is not a novel approach and has been well-studied in the neural program synthesis domain, and the authors did not properly discuss the related work.
The authors promise to release the source code upon paper acceptance. |
ICLR | Title
Generating Sequences by Learning to Self-Correct
Abstract
Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations, frequently violate these constraints, and lack a mechanism to iteratively revise their outputs. Moreover, some powerful language models are of extreme scale or inaccessible, making it inefficient, if not infeasible, to update their parameters for task-specific adaptation. We present SELF-CORRECTION, an approach that decouples an imperfect base generator (an off-the-shelf language model or supervised sequence-to-sequence model) from a separate corrector that learns to iteratively correct imperfect generations. To train the corrector, we propose an online training procedure that can use either scalar or natural language feedback on intermediate imperfect generations. We show that SELF-CORRECTION improves upon the base generator in three diverse generation tasks (mathematical program synthesis, lexically-constrained generation, and toxicity control), even when the corrector is much smaller than the base generator.
1 INTRODUCTION
The standard practice for natural language generation tasks is inherently single-pass: applying a decoding procedure to either a few-shot prompted language model or one tuned for a given task, then considering the generation as “finished” (e.g. Radford et al. (2019); Brown et al. (2020); Chen et al. (2021)). Powerful generation models often meet most of the task requirements, yet miss a few (e.g., omitting a subset of keywords), or generate incorrect hypotheses that nevertheless provide useful structure (e.g., a correct problem solving strategy with a missing step). However, after generating even a slightly sub-optimal sequence, the single-pass paradigm requires models to “start from scratch”, effectively discarding work already done. A more natural, intuitive approach is leveraging the generation as a useful starting point to refine into a higher quality output.
To formalize this intuition, we introduce Self-Correction for Sequence Generation. Figure 1 demonstrates its central principle: a generation model is re-framed as a base generator, which produces a reasonable initial hypothesis but does not need to solve the task in one pass, and a second module–the corrector–trained to make up the difference between the hypothesis and an optimal solution. Neither the generator nor the corrector must solve the full task in one pass, and the corrector can be applied multiple times to iteratively improve the output (§3.6). We propose a simple, general procedure for training the corrector (Figure 2) by pairing generator outputs with carefully selected targets. The result is a system which self-corrects, producing outputs through multiple generation passes and breaking the task into steps that can be solved by dedicated and efficient sub-systems.
Self-Correction builds on past work for correction in the code and text (e.g. Yasunaga et al. (2021); Faltings et al. (2021)) domains, but provides a unified formalism with minimal assumptions about data and feedback, which applies generally to diverse tasks. A corrector model improves the base generator on 3 such tasks in our experiments: mathematical program synthesis (§3.1), lexically constrained generation (§3.2), and toxicity reduction (§3.3). The trained corrector model even transfers to a larger generator with similar performance to training from scratch (§3.4). Finally, we explore introducing a third module to the Self-Correction system (§3.5), explicitly using natural language feedback to guide corrections, with promising results. Self-Correction is an exciting path to build on the generations of strong models, with efficient, effective, and transferable corrector networks.

∗First authors, contributed equally. †Second authors, contributed equally.
2 SELF-CORRECTING SEQUENCE GENERATORS
A typical autoregressive text generator (e.g. GPT-3 (Brown et al., 2020)) maps an input prompt to a distribution over outputs using a single parameterized module (e.g. a large transformer), p0(y|x). We explore an alternative that decomposes into two modules, a base generator, and a corrector,
p(y|x) = \sum_{y_0} \underbrace{p_0(y_0|x)}_{\text{generator}} \; \underbrace{p_\theta(y|y_0, x)}_{\text{corrector}} \qquad (1)

where the generator provides an initial hypothesis that is refined by the corrector. In practice, the corrector can be applied multiple times, p(y_T|x) = \sum_{y_0} \sum_{y_1} \cdots \sum_{y_{T-1}} p_0(y_0|x) \prod_t p_\theta(y_{t+1}|y_t, x). Since a model of this form can both generate and correct its generations, we call it a Self-Corrector.
Self-correctors have several unique properties compared to typical generators. First, a self-corrector decouples generation and correction, allowing us to freely parameterize each module: for instance, by prompting a single language model or using two different language models. In this paper, we develop a framework to train a separate corrector model (§2.1). We find that the resulting self-corrector improves upon the generator alone (§3), even when the corrector is much smaller (§3.4).
Second, since the generator and the corrector are separated, we can keep the generator as a generalpurpose language model and train the corrector with different objectives for different task requirements. In §2.1, we propose a training algorithm for the corrector that is dedicated to improving generations, where the improvement can be in any aspect, measured by scalar values.
Third, the corrector can receive explicit feedback about intermediate generations to guide subsequent generations. Formally, p(y|x) = \sum_{y_0} p_0(y_0|x)\, p_\theta(y|y_0, x, f(y_0)), where f is the feedback. The feedback can be of many forms, e.g. a sentence, a compiler trace, etc. In contrast, a typical generator that generates in a single pass does not leverage feedback on its own generation. In this paper, we show that the corrector can learn to exploit explicit natural language feedback to achieve better performance (§3.5). Next, we describe our training framework for the corrector.
2.1 LEARNING A CORRECTOR
Our goal is to have the generator generate an initial hypothesis, then improve the hypothesis with the corrector (Eq. 1). We train the corrector to improve the quality of a hypothesis, while staying as close as possible to the original hypothesis. Here, quality is measured with a scalar value function v(y) which is accessible at training time (e.g. 0/1 indicator of program correctness, a toxicity score).
Algorithm 1 Self-corrective learning
Input: generator p0, corrector pθ, prompts X, value v(·), feedback f(·)

    Initialize datapool D by sampling from p0                      ▷ Initialization: Eq. 2
    for iteration ∈ {1, 2, . . .} do
        Form value-improving pairs P from D                        ▷ Pairing: Eq. 3
        for step in 1, 2, . . . , M do
            Sample a batch of value-improving pairs from P using Eq. 4
            Compute the loss and update θ using gradient descent   ▷ Learning
        for x ∈ X do
            Sample hypotheses y from datapool D
            Generate corrections y′ ∼ pθ(·|y, x, f(y))
            Add all (x, y′, v(y′), f(y′)) to the datapool D        ▷ Exploration: Eq. 5
Since direct supervision on how to improve hypotheses is not available, we design a new algorithm to train the corrector, which we refer to as self-corrective learning. The algorithm collects a pool of generations, pairs them and selects pairs of generation that increase in value and are nearby, then updates the corrector on these pairs. As training progresses, more generations are added to the pool using the current corrector. Algorithm 1 summarizes self-corrective learning, detailed below.
Initialization. Self-corrective learning begins with a generator p0(y0|x), a corrector pθ(y′|y, x) , a set of training prompts X , and a value function v : Y → R. Optionally, we can use additional feedback f : Y → F and learn pθ(y′|y, x, f(y)), where F is arbitrary. The algorithm initializes a datapool of (input, output, value, feedback) examples by using the generator to generate multiple outputs for each input. Formally,
D_x = \{(x, y, v(y), f(y)) \mid \text{for all } y \in y_{1:N} \sim q(p_0(\cdot|x))\}, \qquad D = \bigcup_{x \in X} D_x, \qquad (2)
where y1:N denotes N outputs generated with decoding algorithm q (e.g. temperature sampling). When available, (x, y, v(y), f(y)) examples from another source (e.g. a dataset) can also be added.
Pairing. Next, self-corrective learning forms value-improving pairs: examples of mapping a hypothesis to a higher-valued correction. We use the datapool D to form a set of (input, hypothesis, correction) pairs. A pair is formed when an output has a higher value than another ∗:
P_x = \{(x, y, y') \mid v(y) < v(y') \text{ for all } y, y' \in D_x \times D_x\}, \qquad P = \bigcup_{x \in X} P_x, \qquad (3)
Learning. Next, self-corrective learning selects (input, hypothesis, correction) pairs to update the corrector with. We sample an input, x ∼ U(X), then sample a (x, y, y′) pair proportional to its improvement in value as well as the proximity between the hypothesis y and the correction y′ (we also store the value and feedback for y and y′ along with (x, y, y′), which we omit to reduce clutter):

\mathbb{P}[(x, y, y') \mid x] \propto \exp\Big( \underbrace{\alpha \cdot (v(y') - v(y))}_{\text{improvement}} + \underbrace{\beta \cdot s(y, y')}_{\text{proximity}} \Big) / Z(y), \qquad (4)

where s(y, y′) is a similarity function and Z(y) normalizes over the available corrections for y in Px. Increasing the hyperparameter α ∈ R≥0 puts more weight on targets that add more value, while increasing β ∈ R≥0 retains more similar targets. We update the corrector using the cross-entropy loss L(θ) = − log pθ(y′|y, x, f(y)) on batches sampled in this way.

Exploration. During exploration, self-corrective learning adds new generations to the datapool by generating from the current corrector:
D'_x = \{(x, y', v(y'), f(y')) \mid \text{for all } y' \in y'_{1:N} \sim q(p_\theta(\cdot|y, x, f(y)))\}, \qquad D' = \bigcup_{x \in X} D'_x \qquad (5)
and updating the datapool D ← D∪D′. The hypotheses y to correct can come from any source, e.g. newly sampled from the base generator, or from the datapool; we use the latter in our experiments.
Inference. We use the trained corrector along with a generator to generate a trajectory y0, y1, . . . , yT , and consider yT the final output. Since marginalizing over the intermediate generations in Eq. 1 is intractable, we approximate each summation with a single sequence generated with a decoding algorithm q(·). That is, we decode from the generator, then repeatedly from the corrector:
• Generation: y0 ∼ q(p0(y0|x));
• Correction: yt+1 ∼ q(pθ(yt+1|yt, x, f(yt))), for t = 0, 1, . . . , T − 1.

The stopping time T is either fixed, or reached when a target value is obtained (if v(y) is available).
3 EXPERIMENTS
We evaluate SELF-CORRECTION on a diversity of tasks: mathematical program synthesis, in which generations are strictly correct or incorrect, and generators typically have low performance; lexically-constrained generation, which allows for partial credit, and generators usually give partially-correct solutions (e.g. matching 3 out of 5 constraints); and toxicity control, where ‘correctness’ is more loosely defined, and the output space is much more open-ended. Our experiments are organized to study three settings:
1. Using self-correctors to improve upon generators (§3.1, 3.2, 3.3).
2. Correcting generators that are much larger than the corrector (§3.4).
3. Leveraging explicit feedback during training and inference (§3.5).
Next, we describe the self-correction setup and baselines for each task, along with their results. ∗
3.1 MATHEMATICAL PROGRAM SYNTHESIS
First, we consider mathematical program synthesis (Austin et al., 2021; Mishra et al., 2022). Given a natural language problem specification x, the task is to generate a program y that upon execution returns the correct answer to x. The task is challenging as it draws on language understanding, multiple-step mathematical problem solving (e.g. identifying a solution strategy, decomposing a problem), and leveraging symbolic tools (e.g. built-in operations, variables). Furthermore, the task demands a high level of precision, e.g. a single misplaced operation makes the program incorrect.
Experimental setup. As the corrector we use GPT-Neo 1.3B (Black et al., 2021), an open-source autoregressive language model. GPT-Neo is pre-trained on language and code (Gao et al., 2021), and hence is widely used for code-related generation (e.g. Chen et al. (2021); Ni et al. (2022); Mishra et al. (2022)). We consider two settings for the initial generator: (1) a separate fine-tuned instance of GPT-Neo 1.3B, and (2) few-shot prompted GPT-3 (Brown et al., 2020). For GPT-3, we evaluate the davinci and text-davinci-002 engines, representative of large (≈ 175B∗) generators that are state-of-the-art in related tasks (Wei et al., 2022). See the Appendix for additional details.
∗Code will be available at www.github.com/wellecks/self_correction. ∗Estimated size of davinci (https://blog.eleuther.ai/gpt3-model-sizes). Further details not available.
Self-correction setup. As the value function we use correctness, which is 1 when the program y executes and outputs the ground-truth answer and 0 otherwise. Our main experiments do not use explicit feedback, i.e. f(y) = ∅. At inference time, we study two settings for the corrector: (1) applying k corrections and selecting the final generation, (2) an oracle setting that only corrects a draft if the draft is incorrect. We use greedy decoding for the generator and corrector, and k = 1.
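For illustration, the correctness value function described above could be implemented as the sketch below. The convention that the candidate program stores its result in an `answer` variable is an assumption of this sketch, not something the paper specifies, and executing model-generated code would need sandboxing in practice.

```python
def program_value(program: str, gold_answer) -> int:
    """Correctness value v(y): 1 if the candidate program runs and its `answer`
    variable equals the gold answer, else 0."""
    scope = {}
    try:
        exec(program, scope)              # run the candidate Python program (no sandboxing shown)
    except Exception:
        return 0
    return int(scope.get("answer") == gold_answer)
```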
Datasets. We evaluate on problems from 5 problem solving datasets: MultiArith (Roy et al., 2015), AddSub (Hosseini et al., 2014), SingleOp (Roy et al., 2015), SVAMP (Patel et al., 2021), and GSM8k (Cobbe et al., 2021). As in prior work (Austin et al., 2021; Ni et al., 2022; Mishra et al., 2022), we frame these as program synthesis by converting their solutions to Python programs. We separate our experiments into three increasingly difficult settings:
1. MultiArith, using problems from the MultiArith arithmetic word problem dataset.
2. Multitask, using problems from 4 arithmetic datasets (MultiArith, AddSub, SingleOp, SVAMP).
3. GSM, using problems from the challenging GSM8k dataset.
For the MultiArith and Multitask settings, we make train/valid/test splits using 60/20/20% of the respective datasets. Similar to Ni et al. (2022), for the GSM setting we use the official GSM8k test split, and create a validation split using 20% of the training set. Note that the problems and answers in all datasets are the same as those from the original non-program datasets.
Baselines. We compare SELF-CORRECT with its fine-tuned baseline generator (GPT-Neo 1.3B) in all three settings. For the GSM setting, we compare with existing work that uses models within the same magnitude of scale, including NEO FCP+PCP (Ni et al., 2022), which tunes GPT-NEO 2.7B with additional self-sampled programs, and their fine-tuned GPT-NEO 2.7B baseline. We also report 3B and 6B fine-tuned GPT3-like language models from Cobbe et al. (2021), which were trained on the non-program version of GSM8k. We evaluate larger models later in (§3.4).
Results. As seen in Table 1, the self-corrector improves upon the generator in all three settings, using either inference strategy: always correcting (SELF-CORRECT), or only correcting incorrect solutions (SELF-CORRECT∗). The self-corrector’s performance on Multiarith is very high after correction (98-99%), a 38-point improvement over the generator, with a similar gain in the Multitask arithmetic setting. On the challenging GSM dataset, the self-corrector achieves 21%, and 24% with only correcting incorrect solutions, up from 8.57% for the generator. Notably, this is higher than the larger 2.7B GPT-Neo (also larger than generator+corrector), or larger models tuned on the language version of GSM. The results show that self-corrective learning can improve task performance via training a corrector. Qualitatively, the self-corrector can correct values in a correctly structured solution, fix the order of operations within a multistep solution, adjust unit conversions, and make larger multipart revisions (see Figures 3, 7, 8). Notably, these are learned automatically.
3.2 LEXICALLY CONSTRAINED GENERATION
Next, we consider lexically constrained generation. Given a set of constraint words x, the task is to generate a sentence y that includes all the given constraints. Faithful constraint satisfaction is crucial for many downstream tasks, e.g., those that require converting information to text (McKeown, 1985).
Datasets and Metrics. We experiment on COMMONGEN (Lin et al., 2020) and E2E (Novikova et al., 2017). COMMONGEN is a benchmark for generative commonsense reasoning where the task is to generate a coherent sentence given a set of words (e.g., dog, catch). E2E involves converting structured inputs into natural language. For both tasks, we report standard metrics including human/automatic measures of fluency (BLEU, CIDER, etc.) as well as constraint coverage. We collect human measures of fluency on Amazon Mechanical Turk; see the Appendix for details.
Setup. We parameterize the base generator with GPT-2 (Radford et al., 2019) (large-size for COMMONGEN and medium-size for E2E). We fine-tuned the generator for each task. As the value function for self-corrective learning we use coverage, i.e. the percentage of constraints that are present in the output. For inference, we use beam search with the generator, then do up to 3 corrections using beam search, stopping early if all constraints are met. See the Appendix for additional details.
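A minimal sketch of such a coverage value function is given below; it uses simple lower-cased token matching, and the paper's exact matching rules (e.g. for inflected forms) are not reproduced here.

```python
def coverage_value(sentence: str, constraints) -> float:
    """Coverage value v(y): fraction of constraint words present in the output."""
    tokens = set(sentence.lower().split())
    hits = sum(1 for c in constraints if c.lower() in tokens)
    return hits / max(len(constraints), 1)
```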
Results. Table 2 shows the evaluation results. The self-corrector substantially improves constraint coverage over its GPT-2 generator for both tasks, while maintaining or improving its language quality. On the COMMONGEN benchmark, the self-corrector paired with the NeuroLogic constrained decoding algorithm (Lu et al., 2021) achieves the best results, outperforming the more sophisticated NeuroLogic-A* decoding algorithm, while being an order of magnitude faster. Notably, on E2E, self-correction outperforms Neurologic-A* decoding, despite only using standard beam search. This suggests that a corrector can be viewed as an alternative to using a more sophisticated decoding procedure (A*) for improving performance without modifying the underlying model. See Figure 9.
3.3 TOXICITY REDUCTION
Next, we consider the task of toxicity reduction (Gehman et al., 2020; Liu et al., 2021). Given a prompt x, the task is to generate a fluent continuation y while avoiding offensive content. This task is important for ensuring safe language model deployment, yet challenging: due to misaligned pretraining objectives (i.e. modeling internet text vs. non-toxic text), language models are susceptible to generating toxic completions, even when prompted with seemingly innocuous text (Gehman et al., 2020). Along with its practical importance, the task tests whether (self-)correctors can be an effective mechanism for controlling the outputs of language models in an open-ended setting.
SELF-CORRECT 0.171 0.026 11.81 0.80 0.83
Table 3: Toxicity reduction. GPT-2 is the base generator.
Datasets and Metrics. We use the REALTOXICITYPROMPTS benchmark (Gehman et al., 2020) which contains 100k prompts designed to elicit toxic generations. Following the experimental setup of Liu et al. (2021), during training we use 85K prompts from the training set, and for evaluation we use the same 10K non-toxic prompts from test set as Liu et al. (2021). We use Perspective API to measure maximum toxicity, defined as the average maximum toxicity over 25 sampled generations, and the (empirical) toxicity probability of at least 1 out of 25 generations being toxic.
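The two evaluation metrics can be computed as in the sketch below; the 0.5 threshold for calling a continuation "toxic" is a common convention and is assumed here rather than taken from the paper.

```python
def toxicity_metrics(scores_per_prompt, threshold=0.5):
    """For each prompt, take the maximum toxicity over its 25 sampled continuations,
    then report the average maximum toxicity and the empirical probability that
    at least one continuation exceeds the threshold."""
    max_scores = [max(scores) for scores in scores_per_prompt]
    avg_max_toxicity = sum(max_scores) / len(max_scores)
    toxicity_prob = sum(1 for m in max_scores if m >= threshold) / len(max_scores)
    return avg_max_toxicity, toxicity_prob
```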
Baselines. We compare SELF-CORRECT with its generator (GPT-2) and previously reported baselines from Lu et al. (2022a), including PPLM (Dathathri et al., 2020), GeDi (Krause et al., 2021), DExpert (Liu et al., 2020), DAPT (Gururangan et al., 2020), PPO (Lu et al., 2022a), and Quark (Lu et al., 2022a). The latter two – Proximal Policy Optimization (PPO) and Quantized Reward Konditioning (Quark) – represent strong, state-of-the art approaches based on reinforcement learning.
Setup. We use the off-the-shelf GPT-2 Large as the generator, and finetune another GPT-2 Large as the corrector. During inference, we use nucleus sampling with p = 0.9 to generate 25 samples for all baselines. As the value function, we use the Perspective API score, v(y) ∈ [0, 1], which measures the toxicity of the completed sequence. We do up to three corrections with the corrector model.
Results. Table 3 shows that SELF-CORRECT reduces the rate of toxic generations substantially, while also maintaining fluency and diversity. SELF-CORRECT outperforms all baselines. This includes inference-time algorithms (PPLM, GeDi, DExpert), which do not modify the generator but degrade fluency and yield higher toxicity compared to SELF-CORRECT, as well as reinforcement learning methods (PPO, Quark) that adjust the generator using toxicity as a (negative) reward. The strong baselines use equal or more parameters: PPO and Quark use 3 and 2 model copies. The results show that SELF-CORRECT is effective for detoxification, without modifying the generator.
3.4 CHANGING MODULES – CORRECTING GPT-3
Next, we show that a self-corrector can improve the outputs of a generator that is much larger than the corrector. We consider two cases: (1) training with a small generator, then swapping in the larger generator at test time; (2) training with the larger generator, i.e. using the large generator to initialize the datapool for self-corrective learning, then using the large generator at test time.
Toxicity. We evaluate case (1) for reducing the toxicity of a large generator (GPT-2 XL, GPT-3). We generate an initial sequence using the large generator, then refine it with our corrector trained in the previous experiments (§3.3). Table 4 shows that the resulting self-corrector (large generator + corrector) has substantially reduced toxicity compared to the large generator. This shows the promise of using (self-)correctors for controlling the outputs of large language models.
Math program synthesis. Table 4 shows results for math. Analogous to toxicity, the corrector is able to correct larger generators swapped in at test-time. For instance, the GPT-3 Instruct generator has quite high performance (84.90 Multitask, 36.80 GSM), which improves to 90.90 and 45.00,
respectively, by adding in a corrector. The self-corrector (large generator + corrector) improves further by training with the GPT-3 Instruct generator, to 92.75 and 45.92, respectively.
3.5 LEVERAGING EXPLICIT FEEDBACK
Next, we demonstrate SELF-CORRECT’s capacity to incorporate explicit natural language feedback. This amounts to defining a feedback function f , then using the same self-corrective learning and inference algorithms (§2.1) as in our preceding experiments (in those experiments, f returned ∅). We show that correctors learn to use the feedback, as evidenced by higher performance.
Toxicity. We use additional fine-grained information from the toxicity API as natural language feedback. Specifically, besides the overall toxicity score, Perspective API also provides scores for fine-grained attributes of toxicity (e.g. identity attack, profanity, flirtation, etc.). At training time, we compare the attribute scores from a hypothesis and its selected correction, and use the attribute with the largest decrease as natural language feedback (e.g. "decrease toxicity in profanity"). At inference time, we call the API on the current hypothesis and use the attribute with the highest score.
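One way to turn the attribute scores into a feedback string, following the description above, is sketched here; the exact wording and the function name are our own assumptions, with only the "decrease toxicity in ..." phrasing taken from the paper's example.

```python
def toxicity_feedback(hypo_attrs, corr_attrs=None):
    """Training time (both attribute dicts given): pick the attribute whose score
    decreases the most from hypothesis to correction. Inference time: pick the
    highest-scoring attribute of the current hypothesis."""
    if corr_attrs is not None:
        attr = max(hypo_attrs, key=lambda a: hypo_attrs[a] - corr_attrs.get(a, 0.0))
    else:
        attr = max(hypo_attrs, key=hypo_attrs.get)
    return f"decrease toxicity in {attr}"
```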
Lexical constraints. At training time, we generate natural language feedback for every example pair (x, y, y′) by elaborating the extra lexical constraints satisfied by y′ but not y, e.g. “adding constraint word: read”. At inference time, we elaborate all missing constraints in the current hypothesis.
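A small sketch of this feedback construction is shown below; only the "adding constraint word: ..." phrasing comes from the paper, while the simple token matching and the wording for the fully-satisfied case are assumptions.

```python
def constraint_feedback(hypothesis: str, constraints) -> str:
    """List the constraint words missing from the current hypothesis."""
    tokens = set(hypothesis.lower().split())
    missing = [c for c in constraints if c.lower() not in tokens]
    if not missing:
        return "all constraints satisfied"
    return "adding constraint word: " + ", ".join(missing)
```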
Math program synthesis. Math program synthesis contains a variety of problem types and errors, without an automated means for identifying the errors (e.g. an API). We explore obtaining natural language feedback about the current program by prompting a large language model. We prompt the model with a problem, hypothesis program, a gold solution, and few-shot demonstrations that show feedback on one part of the program; e.g. “In the initial guess, 3 should be subtracted.” When the program is correct, the feedback is “Correct.” At inference time, we also use feedback from the language model. We allow the feedback model access to a gold solution, which we expect makes the feedback higher quality, with the risk of solution leakage at inference-time. Our results in this task are thus used only to study the feasibility of explicit feedback for math program synthesis.
Setup. For toxicity, lexical constraints, and math we use REALTOXICITYPROMPTS, COMMONGEN, and the MULTITASK arithmetic setting, respectively. We follow the setup of each task’s previous experiments (§3.3, §3.2, §3.1), except that for math we use 5 correction iterations (previously 1). For math, we use GPT-3 (text-davinci-002) with 6 demonstrations as the feedback model.
Results. Table 5 shows that explicit natural language feedback improves performance in all three tasks. For toxicity, this means that providing fine-grained attributes (e.g. identity attack, profanity, etc.) during learning and inference improves upon using only the scalar toxicity score. Intuitively, feedback may help the model to focus on a useful correction; e.g., see Figure 6.
Figure 5: Math: multiple corrections.
Ablation  Math  COMMONGEN
SELF-CORRECT  78.24  94.55
✗ proportional sampling  77.25  93.49
✗ value pairing  62.35  91.76
Table 6: Effect of pairing and proportional sampling.
Exploration  Multiarith  Multitask  GSM8k
✗  89.20  73.49  17.60
✓  99.17  78.24  23.96
Table 7: Effect of exploration on program synthesis.
3.6 ADDITIONAL ABLATIONS AND ANALYSIS
Effect of multiple corrections. Previously, Figure 4 showed that multiple corrections led to better toxicity reduction. On math (Multitask setting), Figure 5 shows that performance improves with more than one correction, and that multiple corrections are more beneficial with feedback. Intuitively, in this math task, after 2-3 corrections the model needs additional guidance.
Effect of pairing and proportional sampling. Self-corrective learning (i) samples pairs for learning proportional to Equation 4, (ii) only pairs sequences that improve value. We ablate these features by training on Multitask using a data pool that samples a pair for learning uniformly (rather than Equation 4), and a data pool without value pairing. Table 6 shows that both improve performance.
Effect of exploration. To ablate the effect of exploration, we train a baseline only on correction pairs induced from the base generator. Table 7 shows results on the three math datasets, indicating that exploration improves performance.
4 RELATED WORK
Self-Correction relates to work modeling text edits including supervised Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), unsupervised perturbations (Miao et al., 2019; Liu et al., 2020), training on human-written critiques (Saunders et al., 2022), or refining continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast, Self-Correction learns a text corrector online to improve a quality measure without supervised edits or critiques. Recently, Scheurer et al. (2022) use natural language feedback to improve generations. Denoising sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while self-correction ‘denoises’ generations to improve a scalar quality measure. Reinforcement learning (RL) is often used to improve scalar measures in a generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), yet is infeasible for many models (e.g. those accessed by API), and uses only scalar feedback. Moreover, RL-tuned generators can be used within Self-Correction. Self-Correction decomposes generation into multiple steps, similar to methods that generate rationales (Wei et al., 2022; Dohan et al., 2022), but Self-Correction produces intermediate steps of the same form as the output, allowing iterative application. Self-Correction relates to work on program synthesis (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022) and repair (Gupta et al., 2020; Yasunaga & Liang, 2020). Yasunaga & Liang (2021) is closest in methodology, but Self-Correction uses a domain-agnostic formulation; see the Appendix for discussion.
5 CONCLUSION
We introduced self-correctors, a class of models that decompose generation into initial generation and correction steps. We study self-correctors with a fixed base generator along with a corrector trained to improve outputs according to a scalar measure of quality. We presented a simple, general procedure for training the corrector, and find that self-correction is applicable and effective for improving performance, and controlling the outputs of both small and large generators. Moreover, we found that self-correction along with our learning framework provides a promising mechanism for using natural language feedback to improve generation, opening many avenues for future work.
A RELATED WORK
Self-correction provides a flexible framework for improving the performance of off-the-shelf and fine-tuned language models on a wide range of tasks by decomposing generation into a base generator and a corrector. Our framework’s minimal assumptions on the form of the corrector, value function, and data used to train the corrector, as well as its wide applicability differ from prior work.
Learning to fix code. Our work relates to two streams of research in the code domain. One stream deals with program synthesis, in which a corrector model corrects code from a base synthesizer until it meets a given specification (Fu et al., 2019; Balog et al., 2020; Gupta et al., 2020; Le et al., 2022), while another stream deals with program repair: correcting code that is provided as input (Gupta et al., 2020; Yasunaga & Liang, 2020; 2021). Recently, Le et al. (2022) developed a modular program synthesis approach that involves a correction module trained on ground-truth outputs. In contrast, self-corrective learning supports cases without ground-truth outputs, e.g. toxicity.
Closest to our methodology is Yasunaga & Liang (2021). Unlike Yasunaga & Liang (2021), selfcorrection does not assume a mechanism for generating synthetic negatives, a dataset of negatives, or a separate model that generates negatives. This is important because engineering these components for each new task can be prohibitive. Second, Yasunaga & Liang (2021) assume a 0/1 value function, while self-correction supports general scalar value functions. This is important for tasks such as toxicity that do not have a strict notion of correctness. Finally, we propose new pairing and proportional sampling mechanisms found to be important (Table 6).
Iterative text edits. Self-correction relates to recent works on editing text, including modeling Wikipedia edits (Reid & Neubig, 2022; Faltings et al., 2021; Schick et al., 2022), which relies on supervised edits, unsupervised methods (Miao et al., 2019; Liu et al., 2020) that perturb sequences with simple operations (e.g. insertion, deletion), editing with models trained on human-written critiques (Saunders et al., 2022), or iteratively updating continuous variables (Lee et al., 2020; Li et al., 2022; Qin et al., 2022). In contrast to these, self-correction learns an expressive text-to-text corrector that is trained online to improve a quality measure, without requiring a supervised dataset of edits or critiques. Recently, Scheurer et al. (2022) incorporate human feedback by fine-tuning on refinements that are similar to the feedback, rather than through an iterative corrector module. Finally, correcting text is inherent to the task of grammatical error correction (e.g. Lichtarge et al. (2019); Yasunaga et al. (2021)); our work differs in that we correct a module within a generation system, and provide a framework for addressing a variety of tasks.
Denoising and reinforcement learning. Separately, denoising ground-truth sequences is a common pretraining objective (Devlin et al., 2019; Lewis et al., 2020; Raffel et al., 2020), while selfcorrection ‘denoises’ generations to improve a scalar quality measure. Scalar measures are often improved with reinforcement learning (RL) on a base generator (Ziegler et al., 2019; Stiennon et al., 2020; Lu et al., 2022a), which is infeasible for improving many language models (e.g. those accessed through an API), and uses only scalar feedback. Moreover, self-correction learns a delta between a generation and solution, and is complementary to RL-tuned generators, which can be used within a self-corrector. Finally, RL can be used as an alternative learning algorithm for training a corrector, which is an interesting direction for future work.
Modular generation. Self-correction decomposes generation into multiple steps, and is thus part of the general class of methods that decompose generation into a ‘cascade’ of modules (Dohan et al., 2022). Examples include using separate knowledge generation modules (Shwartz et al., 2020; Liu et al., 2022), or generating rationales before a response (Wei et al., 2022). Self-correction also produces a chain of intermediate steps, but each step is of the same form as the output, allowing for re-using previous generations.
B ADDITIONAL EXPERIMENTAL DETAILS
B.1 CROSS-EXPERIMENT DETAILS
In all of our experiments we use an off-the-shelf embedding similarity function from SentenceTransformers (Reimers & Gurevych, 2019): sentence-transformers/all-MiniLM-L6-v2.
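For reference, the snippet below shows one way to compute this similarity with the sentence-transformers library; the wrapper function name is ours, not the paper's.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def similarity(y: str, y_prime: str) -> float:
    """Cosine similarity between sentence embeddings of two generations, used as s(y, y')."""
    emb = model.encode([y, y_prime], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```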
B.2 MATHEMATICAL PROGRAM SYNTHESIS
We fine-tune a separate instance of GPT-Neo 1.3B as an initial generator, using the Huggingface library with default hyperparameters, except for evaluation steps, which we set to a small number to ensure a strong checkpoint is selected for each dataset. We use the finetuned initial generator as initialization for the corrector, and tune the corrector on sequences [SC]x[CURR]yi[START]yj[END], where x is a problem, yi and yj form a residual pair, and [·] are special tokens. The loss is on tokens after [START].
Feedback. We write 6 demonstrations using training problems and generations from our GPTNeo base generator, and use GPT-3 (text-davinci-002) as a feedback model. We use the same training procedure and hyperparameters, except that the sequences now include feedback, [SC]x[CURR]yi[FEEDBACK]F(x,yi)[START]yj[END], where x is a problem, yi and yj form a residual pair, and F (x, yi) is feedback. We include loss on tokens after [FEEDBACK].
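The two training-sequence formats above can be built with a helper like the following sketch; the function name is an assumption, and label masking of the tokens before [START] is not shown.

```python
def build_corrector_example(x, y_current, y_target, feedback=None):
    """Serialize one residual pair (and optional feedback) into the corrector's
    training sequence; the loss is applied to the tokens after [START]."""
    middle = f"[FEEDBACK]{feedback}" if feedback is not None else ""
    return f"[SC]{x}[CURR]{y_current}{middle}[START]{y_target}[END]"
```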
B.3 LEXICALLY-CONSTRAINED GENERATION
Hyper-parameters. Table 8 and Table 9 show hyperparameters for CommonGen and E2E.
Human Evaluation. We evaluate fluency of generations in E2E task using human annotators on Amazon Mechanical Turk (AMT). We randomly sampled 100 instances, along with generations of different baselines and self-corrections. For each instance, we ask 3 annotators to evaluate the fluency of generations on a 3-point Likert scale. We aggregate annotations from 3 annotators using majority vote. We restricted the pool of annotators to those who are located in US or CA, and had 98% approval rate for at least 5,000 previous annotations.
Hyperparameter  Assignment
Predictor  GPT-2 Large
steps  6000
batch size  128
optimizer  Adam
learning rate  1e−5
decoding alg.  beam search (k=5)
Table 8: Hyperparameters for COMMONGEN.
Hyperparameter  Assignment
Predictor  GPT-2 Medium
steps  10000
batch size  100
optimizer  Adam
learning rate  1e−5
decoding alg.  beam search (k=5)
Table 9: Hyperparameters for E2E.
C ADDITIONAL RESULTS
D QUALITATIVE EXAMPLES | 1. What is the main contribution of the paper regarding language model correction?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to diverse tasks?
3. Do you have any concerns about the naming convention used in the paper or the explanation of the algorithm?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor concerns or suggestions for improvement regarding the presentation of the material? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a method to correct the outputs of language models using another language model in order to satisfy semantic constraints. A function v(y) is needed to measure the quality of the output hypotheses y. The corrector is trained to improve this quality while staying close to the original hypothesis. The training is done on value-improving pairs: pairs of outputs (y, y′) from the data pool which belong to the same input and where there is an improvement in quality (v(y) < v(y′)). The data pool is initialized by the base generator but is continuously expanded with the corrected outputs, so the corrector can learn to correct its own output even further in the case of multiple iterations. It is also possible to provide feedback to the corrector, which improves the results.
The authors study the model and obtain state-of-the-art results on three diverse tasks with different constraints: a mathematical program synthesis task, a text generation task given lexical constraints (which words should be present in the text), and a text generation task where the generated text should not be toxic.
Strengths And Weaknesses
Strengths
The paper is built upon a good idea with practical applicability: the method allows to correct large language models using relatively small computational resources.
The evaluations are also comprehensive with different tasks, baselines, and ablation studies, and the obtained results are excellent.
Weaknesses
My first concern is about self-correction. I believe that the model corrects its mistakes and not itself: it does not reprogram itself to work more efficiently. It is not different in this respect from a genetic algorithm which improves subsequent generations, especially if we combine it with a neural network. I don't see what the presented method adds in comparison to warrant naming it "self-correction". Also, naming the method "self-correction" is too general.
My second concern is about the explanation of the algorithm in 2.1; it could be made clearer. In the first paragraph, "(e.g., a classifier)" doesn't help to understand what v(y) is. The second paragraph states that "the algorithm collects a pool of generations, groups them and selects pairs of generation that increase in value and are nearby". I think this could be expanded to be more helpful and precise. The authors could state what the generations are (and that the first generation is from the outputs of the base generator). The algorithm doesn't really group the generations: as far as I understand it works on subsets of the data pool, where each subset corresponds to an input example and contains (among other things) the outputs of the base generator and the iteratively applied corrector for that input. Lastly, the algorithm selects pairs of outputs for the same input which could be from the same generation.
The organization of 2.1. could also be improved somewhat, as the ordering of the phases does not correspond to the order in which they happen in the algorithm. Particularly, I would move Exploration after Initialization and before Pairing.
Minor concerns:
I think it's a stretch that self-corrector paired with NeuroLogic outperforms NeuroLogic-A* in Table 2.
The last line of page 8 should say that Figure 6 is in the Appendix.
Clarity, Quality, Novelty And Reproducibility
The paper is very clearly written, has high quality, is novel and practical. The authors will make the code publicly available upon acceptance, making it easier to reproduce the results. |
ICLR | Title
Domain-slot Relationship Modeling using a Pre-trained Language Encoder for Multi-Domain Dialogue State Tracking
Abstract
Dialogue state tracking for multi-domain dialogues is challenging because the model should be able to track dialogue states across multiple domains and slots. Past studies had its limitations in that they did not factor in the relationship among different domain-slot pairs. Although recent approaches did support relationship modeling among the domain-slot pairs, they did not leverage a pre-trained language model, which has improved the performance of numerous natural language tasks, in the encoding process. Our approach fills the gap between these previous studies. We propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. Inspired by the way the special [CLS] token in BERT is used to aggregate the information of the whole sequence, we use multiple special tokens for each domain-slot pair that encodes information corresponding to its domain and slot. The special tokens are run together with the dialogue context through the pre-trained language encoder, which effectively models the relationship among different domain-slot pairs. Our experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset.
1 INTRODUCTION
A task-oriented dialogue system is designed to help humans solve tasks by understanding their needs and providing relevant information accordingly. For example, such a system may assist its user with making a reservation at an appropriate restaurant by understanding the user’s needs for having a nice dinner. It can also recommend an attraction site to a travelling user, accommodating the user’s specific preferences. Dialogue State Tracking (DST) is a core component of these taskoriented dialogue systems, which aims to identify the state of the dialogue between the user and the system. DST represents the dialogue state with triplets of the following items: a domain, a slot, a value. A set of {restaurant, price range, cheap}, or of {train, arrive-by, 7:00 pm} are examples of such triplets. Fig. 1 illustrates an example case of the dialogue state during the course of the conversation between the user and the system. Since a dialogue continues for multiple turns of utterances, the DST model should successfully predict the dialogue state at each turn as the conversation proceeds. For multi-domain conversations, the DST model should be able to track dialogue states across different domains and slots.
Past research on multi-domain conversations used a placeholder in the model to represent domainslot pairs. A domain-slot pair is inserted into the placeholder in each run, and the model runs repeatedly until it covers all types of the domain-slot pairs. (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019). A DST model generally uses an encoder to extract information from the dialogue context that is relevant to the dialogue state. A typical input for a multi-domain DST model comprises a sequence of the user’s and the system’s utterances up to the turn t, Xt, and the domain-slot information for domain i and slot j, DiSj . In each run, the model feeds the input for a given domain-slot pair through the encoder.
fencoder(Xt, DiSj) for i = 1, · · · , n, j = 1, · · · ,m, (1)
where n and m is the number of domains and slots, respectively. However, because each domain-slot pair is modeled independently, the relationship among the domain-slot pairs can not be learned. For example, if the user first asked for a hotel in a certain place and later asked for a restaurant near that hotel, sharing the information between {hotel, area} and {restaurant, area} would help the model recognize that the restaurant should be in the same area as the hotel.
Recent approaches address these issues by modeling the dialogue state of every domain-slot pair in a single run, given a dialogue context (Chen et al., 2020; Le et al., 2019). This approach can be represented as follows:
fencoder(Xt, D1S1, · · · , DnSm). (2)
Because the encoder receives all of the domain-slot pairs, the model can factor in the relationship among the domain-slot pairs through the encoding process. For the encoder, these studies used models that are trained from scratch, without pre-training. However, since DST involves natural language text for the dialogue context, using a pre-trained language model can help improve the encoding process. Several studies used BERT (Devlin et al., 2019), a pre-trained bidirectional language model, for encoding the dialogue context (Zhang et al., 2019; Lee et al., 2019; Chao & Lane, 2019; Gao et al., 2019), but did not model the dependencies among different domain-slot pairs. Our approach fills the gap between these previous studies. In this work, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We modify the input structure of BERT, specifically the special token part of it, to adjust it for multi-domain DST.
The [CLS] token of BERT (Devlin et al., 2019) is expected to encode the aggregate sequence representation as it runs through BERT, which is used for various downstream tasks such as sentence classification or question answering. This [CLS] token can also be used as an aggregate representation for a given dialogue context. However, in a multi-domain dialogue, a single [CLS] token has to store information for different domain-slot pairs at the same time. In this respect, we propose to use multiple special tokens, one for each domain-slot pair. Using a separate special token for each domain-slot pair is more effective in storing information for different domains and slots since each token can concentrate on its corresponding domain and slot. We consider two different ways to represent such tokens: DS-merge and DS-split. DS-merge employs a single token to represent a single domain-slot pair. For example, to represent a domain-slot pair of {restaurant, area}, we use a special token DS(restaurant,area). DS-split, on the other hand, employs tokens separately for the domain and slot and then merges them into one to represent a domain-slot pair. For {restaurant, area}, the domain token Drestaurant and the slot token Sarea. is computed separately and then merged. We use {DS}merge and {DS}split to represent the special tokens for DS-merge or DS-split, respectively. Unless it is absolutely necessary to specify whether the tokens are from DS-merge or DS-split, we’ll refer to the DS-produced tokens as {DS} tokens, without special distinction, in our descriptions forward. The {DS} tokens, after being encoded by the pre-trained language encoder along with the dialogue context, is used to predict its corresponding domain-slot value for a given dialogue context.
2 RELATED WORKS
Recent work on dialogue state tracking can be largely divided into two groups according to how the slot-values are predicted: fixed-vocabulary and open-vocabulary. The fixed-vocabulary approach, also known as the picklisted-based approach, uses a classification module to predict the dialogue state for each slot from a pre-defined set of candidate values (Zhong et al., 2018; Nouri & HosseiniAsl, 2018; Ramadan et al., 2018; Eric et al., 2019; Lee et al., 2019; Chen et al., 2020). The openvocabulary approach generates the dialogue state for each domain-slot pair either by using a generative decoder to generate text (Wu et al., 2019; Hosseini-Asl et al., 2020) or by extracting text spans from the dialogue history (Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). There is also an approach to use both picklist-based and span-based methods according to the slot type (Zhang et al., 2019).
For models that deal with multi-domain dialogue, how they deal with different domain-slot pairs is another way to divide them. The first approach encodes the dialogue context independent of the domain-slot pairs and uses separate modules for each domain-slot pair (Eric et al., 2019; Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). The second approach encodes the dialogue context using the domain-slot pair information as the prefix and run the encoder multiple times (Nouri & Hosseini-Asl, 2018; Wu et al., 2019). Other approaches encode the dialogue context independently but merges it with domain-slot pair information later with a separate fusion module (Zhong et al., 2018; Ramadan et al., 2018; Lee et al., 2019). However, none of these models are able to model the relationship among different domain-slot pairs because there is no module that enables the interaction between them.
(Le et al., 2019) and (Chen et al., 2020) directly models the relationship among different domainslot pairs. (Le et al., 2019) uses a Fertility decoder to learn potential dependencies across domainslot pairs, but without using a pre-trained language model. Also, their model requires additional data such as system action and delexicalized system responses for its performance. (Chen et al., 2020) also explicitly models the relationship among different domain-slot pairs by using a Graph Attention Network (GAT) (Veličković et al., 2018). Schema graphs, which is the relation graph between domains and slots, are utilized for connecting edges in the GAT. Our work is different from these works in that we leverage the power of a pre-trained language encoder for directly modeling the dependencies among different domain-slot pairs.
(Hosseini-Asl et al., 2020) takes a different approach from the others by using multi-task learning that encompasses DST as well as action and response generation with a generative language model GPT-2 (Radford et al., 2019). However, since our work is focused on DST, we consider the model that is trained on DST only. In the decoding process, dialogue states for different domain-slot pairs are sequentially generated.
3 PROPOSED METHOD
Our model is composed of three parts. The first is the domain-slot-context (DSC) encoder, which encodes the dialogue context along with the special tokens representing domain-slot pairs. Next is slot-gate classifier, which is a preliminary classifier that predicts whether each domain-slot pair is relevant to the dialogue context. The adopted the concept of the slot-gate classifier from (Wu et al., 2019) and made adjustments to apply to our model. The last is the slot value classifier for predicting the value for each domain-slot pair among the candidate values.
In the following descriptions, we assume a dialogue context with a total of T turns. The task is to predict the dialogue state, which are {domain, slot, value} triplets for all domain-slot pairs, for every turn t = 1, · · · , T , using the dialogue context until each turn. Section 3 show the overview of our proposed model.
3.1 DOMAIN-SLOT-CONTEXT ENCODER
The main structure of our model is the DSC encoder, which uses a pre-trained language encoder to encode the dialogue context along with {DS} tokens. For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) due to its strong performance on numerous natural language understanding tasks while having fewer parameters compared to other BERT-style encoders. {DS} tokens work like
the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). The set of special tokens for each layout are shown in Eq. (3) and Eq. (4), respectively. In DS-merge, we used special tokens for each individual domain-slot pair. If there are many domain-slot pairs, using this layout can increase the number of special tokens as each domain-slot pair requires a separate special token. In DS-split, we used separate tokens for the domain and slot. To represent a domain-slot pair, we merged the corresponding tokens from each domain and slot by concatenating them. This promotes modeling compositionality, since the same slot token can be used for different domains. These {DS} tokens and the dialogue context are processed through the DSC encoder, which results in each token in {DS} being encoded with contextualized representations according to its domain and slot.
{DS}merge = {DS(domain(1),slot(1)), · · · , DS(domain(n),slot(m))} (3)
{DS}split = {Ddomain(1), · · · , Ddomain(n), Sslot(1), · · · , Sslot(m)} (4)
Fig. 3 shows the input representation of the DSC encoder. The sequence begins with {DS} tokens. The special token [CLS] follows, which encodes the overall information of the dialogue context. For the dialogue context, we added a special token [SEPu] to separate each user or system utterance, which is added at the end of each utterance from the user or system. The input ends with a special token [SEP ] as the end-of-sequence token.
4 types of embeddings are summed up to represent each token embedding. We used the pre-trained word embedding of ALBERT, except for the {DS} tokens, which are randomly initialized. We introduced the token type embedding to differentiate the {DS} tokens, user utterances tokens, and system utterances tokens. For DS-merge, we used a single token type embedding to represent a domain-slot pair, whereas for DS-split, we used two token type embeddings, one for the domain and the other for the slot. We did not apply this embedding for the [CLS] token. Position embeddings are also employed from ALBERT, but the index of the positional embedding starts from the [CLS] token. We did not use the positional embedding for the {DS} tokens as the order within those tokens is meaningless. Lastly, the segment embedding from ALBERT was used to represent the whole sequence as a single segment, which is the default segment embedding of ALBERT.
DSC encoder encodes contextualized embeddings for every input token. However, for the slotgate classifier and slot-value classifier, we only use the special token outputs of the DSC encoder ([CLS] token and {DS} tokens). This is formally defined as follows for DS-merge and DS-split, respectively, for turn t:
D̂S(1,1), · · · , D̂S(n,m), ĈLS = DSCencoder([{DS}merge, CLS,Xt, SEP ]), (5)
D̂1, · · · , D̂n, Ŝ1 · · · , Ŝm, ĈLS = DSCencoder([{DS}split, CLS,Xt, SEP ]), (6)
where Xt represents the dialogue context of (S1, SEPuU1, SEPu, · · · , St, SEPu, U t, SEPu). U t and St represents the utterance for the tth turn for the user and system respectively. The {DS}
tokens and [CLS] token with the hat notation ̂ represents the encoded output of the DSC encoder for those special tokens. They are vectors of Rd, where d is the hidden dimension of ALBERT.
3.2 SLOT-GATE CLASSIFIER
For the slot-gate classifier, we use the DSC encoder output of the {DS} tokens for each domainslot pair to predict whether it is relevant to the dialogue or not. In previous methods, gating used categories of {prediction, dontcare, none}, where prediction means a slot value is not dontcare or none and dontcare means that the predicted slot value is dontcare and none means that the domain-slot is non-relevant. The label for slot-gates are made from the slot-values. However, the performance for the dontcare category was far inferior to the other two categories, so we dismissed the dontcare category and only used {prediction, none}. In our preliminary models with ALBERT large-v2, the prediction and recall for dontcare was 48.87% and 17.21%, respectively. The precision and recall for none showed 98.91%, 99.45% and prediction 96.16%, 94.93%, respectively. In this setting, the dontcare category is included in prediction. For DS-merge, the slot-gate classifier predicts the value using the domain-slot pair special token. For the domain-slot pair of domain i and slot j, the slot-gate classifier output for DS-merge is
GateDiSj = sigmoid ( WGDS(i,j) D̂S(i,j) ) , (7)
where WGDiSj ∈ R 1×d. For DS-split, the slot-gate classifier uses the concatenated output of the corresponding domain and slot token. Similarly, for the same domain-slot pair, the slot-gate classifier output for DS-split is
GateDiSj = sigmoid ( WG(Di,Sj) [ D̂i|Ŝj ]) , (8)
where | represents concatenation of vectors and WG(Di,Sj) ∈ R 1×2d. The loss objective for the gate classification is as follows:
Lgate = ∑(i,j)∈DS BinaryCrossEntropy(ygateDiSj , GateDiSj), (9)
where DS refers to the set of all domain-slot pairs and ygateDiSj is the binary slot-gate label for domain i and slot j. If the domain-slot is predicted to none, the corresponding output of the slot-value classifier is changed into none regardless of the prediction of the slot-value classifier.
3.3 SLOT-VALUE CLASSIFIER
We employ the fixed-vocabulary based classification method for predicting slot values. As in (Zhang et al., 2019), the candidate-value list for each domain-slot pair was constructed by using the values from the training dataset, rather than using the incomplete ontology from the dataset. The [CLS] token is concatenated with each token from {DS}, and used as the input to the slot-value classifier for each domain-slot pair. The slot-value classifier output of domain i and slot j for DS-merge is as follows:
V alueDiSj = softmax ( WVDS(i,j) [ D̂S(i,j)|ĈLS ]) , (10)
where WVDS(i,j) ∈ R nDiSj × 2d and nDiSj is the number of candidate values for domain i and slot j. Similarly, for DS-split, the slot-value classifier output is
V alueDiSj = softmax ( WV(Di,Sj) [ D̂i|Ŝj |ĈLS ]) , (11)
where WV(Di,Sj ) ∈ R nDiSj × 3d. The loss objective for the slot-value classification is as follows:
Lvalue = ∑(i,j)∈DS CrossEntropy(yvalueDiSj , ValueDiSj), (12)
where yvalueDiSj is the label for domain i and slot j.
3.4 TOTAL OBJECTIVE FUNCTION
The DSC encoder, slot-gate classifier and slot-value classifier are jointly trained under the total objective function below.
Ltotal = Lgate + Lvalue (13)
4 EXPERIMENT SETUP AND RESULTS
We evaluate our model using the joint goal accuracy, which considers a model prediction to be correct when the prediction jointly matches the ground truth values for all domain-slot pairs, given a dialogue context.
4.1 DATASET
We use the MultiWOZ-2.1 (Eric et al., 2019) and MultiWOZ-2.2 dataset (Zang et al., 2020), both of which fixed noisy annotations and dialogue utterances of the MultiWOZ 2.0 dataset (Budzianowski et al., 2018). The dataset contains 7 domains and over 10,000 dialogues. We follow the previous studies and use 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs. The other two domains (police, hospital) have little data and do not appear in the test dataset. For MultiWOZ-2.1, we follow the pre-processing explained in (Wu et al., 2019). For MultiWOZ-2.2, we use the raw data as given without any pre-processing.
4.2 SETUP
For the pre-trained language encoder, we used ALBERT(Lan et al., 2019) from HuggingFace (Wolf et al., 2019) in Pytorch (Paszke et al., 2019). We used the xxlarge-v2 version of ALBERT for the main experiment and compare other versions (base-v2, large-v2) in the analysis section. We also compared RoBERTa (Liu et al., 2019) to generalizability of our model. The optimizer was AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e−5 for ALBERT-xlarge-v2, ALBERT-xxlargev2 and RoBERTa-large and 5e−5 for ALBERT-base-v2, ALBERT-large-v2 and RoBERTa-base. We applied linear warm-up followed by linear decay for the learning rate. We trained all models with the effective batch size of 32, using gradient accumulation for bigger ALBERT models. Models were
selected based on their joint goal accuracy on the validation data split. Only the training data was used to build the labels for each domain-slot pair. We used two NVIDIA V100 for our training. The original ALBERT was pre-trained with a sequence length of up to 512 tokens. However, dialogues that are longer than 512 tokens exists in the data. Usually, the standard procedure for this situation is to truncate the sequence up to 512 tokens and discard the remaining tokens. However, to cover dialogues longer than 512 tokens that are in the dataset, we resized the positional embedding to cover a maximum length of the dialogue. We preserved the original pre-trained position embedding for positions indices up to 512 and randomly initialized the remaining position indices. This method showed better results than limiting the maximum sequence length to 512. We plan to release our code on Github.
4.3 RESULTS
Table 1 shows the joint goal accuracy of our model compared to previous methods. Both of our models show better performance among models without any additional supervision other than the dialogue context and domain-slot pair labels. Especially, the DS-split, ALBERT-xxlarge-v2 version of our proposed model achieves state-of-the-art result on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset, without any form of extra supervision. However, in smaller models, The model with DSsplit shows better results than the model with DS-merge. This shows that in models with enough capacity, the slot-sharing of DS-split was more effective. However, this was not the case for smaller ALBERT models, which is explained in Section 4.4.2. This is important in that scalability is much better for DS-split than DS-merge, as many slots can be shared across different domains, reducing the number of special tokens to be used. We show the individual domain-slot accuracy in Appendix A.2, Table 4.
4.4 ANALYSIS
In this section, we show that relationship modeling among different domain-slot pairs is indeed the key factor of our proposed model by running ablation studies. Also, we compare the effect of the size and type of the pre-trained language encoder in terms of performance.
4.4.1 RELATIONSHIP MODELING AMONG DIFFERENT DOMAIN-SLOT PAIRS
First, we did not use any {DS} tokens and only used the CLS token. Because there are no dedicated special tokens for each domain-slot pair, the performance is very poor as shown in ’None’ row in Table 2. This shows that our approach to introduce {DS} is effective. Next, to evaluate the effect of relationship modeling among different domain-slot pairs, we blocked the attention among different {DS} tokens during the encoding process, which restricts direct interaction among {DS} tokens. Table 2 shows that without the relationship modeling, our model performance deteriorates by a substantial amount. This validates our idea that relationship modeling is the crucial factor for our approach.
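The ablation that blocks attention among {DS} tokens can be pictured with the mask sketch below; the paper does not give implementation details, so the shape and convention used here (a square token-level mask with 1 = may attend, 0 = blocked) are assumptions.

```python
import torch

def ds_block_attention_mask(seq_len, ds_positions):
    """Full attention everywhere except between different {DS} tokens,
    restricting direct interaction among domain-slot pairs during encoding."""
    mask = torch.ones(seq_len, seq_len, dtype=torch.long)
    for i in ds_positions:
        for j in ds_positions:
            if i != j:
                mask[i, j] = 0
    return mask
```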
In the Appendix A.1, we show some examples of wrong predictions that models without direct relationship modeling has made.
4.4.2 SIZE AND TYPE OF THE PRE-TRAINED LANGUAGE ENCODER
We compared ALBERT and RoBERTa (Liu et al., 2019) and various model sizes within those pretrained language encoders. Table 3 shows the result for different versions of the pre-trained language encoders. For ALBERT, a bigger language model shows better results as is shown in various downstream tasks that ALBERT was evaluated on (Lan et al., 2019). Except for ALBERT-xx-large, all other configurations show that DS-merge shows better performance than DS-split. Based on the drastic increase in performance with xx-large, we presume that the high model complexity of ALBERTxx-large enabled {DS}split tokens to effectively encode information and make slot-sharing to work. In smaller models, this slot-sharing might not have been as effective due to their smaller encoding capacity. Also, concatenation, which was used for merging domain and slot embeddings in DS-split, might not have been enough for fully representing the information for the domain-slot pair in smaller models. RoBERTa also shows similar results with bigger models showing stronger performance.
4.4.3 LEARNING CURVE
Fig. 4 shows the learning curve of the ALBERT-xxlarge-v2 on the MultiWOZ-2.2 dataset. The joint goal accuracy steadily increases after the slot-value loss plateaus.
5 CONCLUSION
In this paper, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We introduced two methods to represent special tokens for each domain-slot pair: DS-merge and DS-split. These tokens work like the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). These special tokens are run together with the dialogue context through the pre-trained language encoder, which enables modeling the relationship among different domain-slot pairs. Experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset. The ablation experiments show that the relationship modeling among different domain-slot pairs is the key element of our model. Also, we showed that larger pre-trained language encoders improves performance. We hope to advance our research by finding ways to effectively apply our model towards the open-vocabulary approach, which will enable better generalization for candidate values that are outside of the training data.
ACKNOWLEDGMENTS
This research was supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services / PR09-01-000-20]
A APPENDIX
A.1 RELATIONSHIP MODELING EXAMPLES
A.1.1 EXAMPLE 1
Fig. 5 shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {taxi, departure} is not explicitly mentioned in the dialogue context. However, Our full model correctly predicts the value for {taxi, departure}, which can be inferred from the dialogue context and {hotel, name}. However, the model without relationship modeling fails to predict the correct value for {taxi, departure}. User: i am staying in cambridge soon and would like to stay at a and b guest house.
System: sure, how many days and how many people?
User: we are staying 6 people for 4 nights starting from tuesday. i need the reference number
System: your booking is successful! your reference number is iigra0mi. do you need anything else?
User: yeas, what to recommend if i want to see good architecture in the west part of town?
System: unfortunately there is no good architecture on the west end but i can look in other parts of town if you want
User: what about a museum?
System: what part of town there are none in the west.
User: there are no museums in the west at all?
System: sorry about that, there are actually 7 in that area.
User: great, can i get the postcode, entrance fee and address of 1 of them?
System: cafe jello gallery has a free entrance fee. the address is cafe jello gallery, 13 magdalene street and the post code is cb30af. can i help you with anything else?
User: yes please. i need a taxi to commute.
System: when would you like to leave and arrive?
User: i would like to get to the gallery by 13:45, please.
System: sure, lookout for a blue volvo the contact number is 07941424083. can i help with anything else?
User: that is all for now. thank you so much
A.1.2 EXAMPLE 2
Fig. 6 also shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {train, day} is not explicitly mentioned in the dialogue context. In a similar manner from the example above, it can be referred from the {restaurant, book day}. User: i would like to find a particular restaurant in cambridge. the name of the restaurant is restaurant 2 two. could you give me the location?
System: restaurant 2 two is nice french restaurant located at 22 chesterton road chesterton. would like me to book you a table?
User: that would be great. i need it for 8 on friday.
System: do you have a time preference?
User: yes at 11:15 if that is not available i can do 10:15
System: the booking for 10:15 was successful they will reserve the table for 15 minutes. the reference number is 6b5z7vj5.
User: thanks. can you help me find a train, too? i want to leave cambridge some time after 12:15.
A.2 INDIVIDUAL SLOT ACCURACY
Table 4 shows the individual domain-slot accuracy for the ALBERT-xxlarge-v2 model on the MultiWOZ-2.2 dataset. | 1. How does the paper's approach differ from previous methods in modeling sentence and domain-slot relationships?
2. What are the limitations of the experimental results presented in the paper?
3. How does the use of domain/slot as vocabulary tags or word embeddings impact the performance of the domain split/merge method?
4. Can you explain the purpose and potential benefits of incorporating multi-domain domain-slot pairs into BERT input?
5. How does feeding the output of the slot-gate classifier into the slot-value classifier affect the overall performance of the model? | Review | Review
Pros
• This paper incorporates the multi-domain domain-slot pairs into the BERT input so that the relations between the sentences and the domain-slot pairs are modeled.
Cons
• It would be better to experiment on more datasets to prove the method's effectiveness.
Comments
• This paper https://arxiv.org/abs/2006.01554 seems to get better performance on the same dataset.
• In the domain split/merge method, is the domain/slot used as a vocabulary tag or a word embedding?
• The slot-gate classifier's output is fed into the slot-value classifier; how does this affect the performance?
ICLR | Title
Domain-slot Relationship Modeling using a Pre-trained Language Encoder for Multi-Domain Dialogue State Tracking
Abstract
Dialogue state tracking for multi-domain dialogues is challenging because the model should be able to track dialogue states across multiple domains and slots. Past studies had its limitations in that they did not factor in the relationship among different domain-slot pairs. Although recent approaches did support relationship modeling among the domain-slot pairs, they did not leverage a pre-trained language model, which has improved the performance of numerous natural language tasks, in the encoding process. Our approach fills the gap between these previous studies. We propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. Inspired by the way the special [CLS] token in BERT is used to aggregate the information of the whole sequence, we use multiple special tokens for each domain-slot pair that encodes information corresponding to its domain and slot. The special tokens are run together with the dialogue context through the pre-trained language encoder, which effectively models the relationship among different domain-slot pairs. Our experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 dataset.
1 INTRODUCTION
A task-oriented dialogue system is designed to help humans solve tasks by understanding their needs and providing relevant information accordingly. For example, such a system may assist its user with making a reservation at an appropriate restaurant by understanding the user’s needs for having a nice dinner. It can also recommend an attraction site to a travelling user, accommodating the user’s specific preferences. Dialogue State Tracking (DST) is a core component of these taskoriented dialogue systems, which aims to identify the state of the dialogue between the user and the system. DST represents the dialogue state with triplets of the following items: a domain, a slot, a value. A set of {restaurant, price range, cheap}, or of {train, arrive-by, 7:00 pm} are examples of such triplets. Fig. 1 illustrates an example case of the dialogue state during the course of the conversation between the user and the system. Since a dialogue continues for multiple turns of utterances, the DST model should successfully predict the dialogue state at each turn as the conversation proceeds. For multi-domain conversations, the DST model should be able to track dialogue states across different domains and slots.
Past research on multi-domain conversations used a placeholder in the model to represent domain-slot pairs. A domain-slot pair is inserted into the placeholder in each run, and the model runs repeatedly until it covers all types of domain-slot pairs (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019). A DST model generally uses an encoder to extract information from the dialogue context that is relevant to the dialogue state. A typical input for a multi-domain DST model comprises a sequence of the user's and the system's utterances up to the turn t, Xt, and the domain-slot information for domain i and slot j, DiSj. In each run, the model feeds the input for a given domain-slot pair through the encoder.
fencoder(Xt, DiSj) for i = 1, · · · , n, j = 1, · · · ,m, (1)
where n and m are the number of domains and slots, respectively. However, because each domain-slot pair is modeled independently, the relationship among the domain-slot pairs cannot be learned. For example, if the user first asked for a hotel in a certain place and later asked for a restaurant near that hotel, sharing the information between {hotel, area} and {restaurant, area} would help the model recognize that the restaurant should be in the same area as the hotel.
Recent approaches address these issues by modeling the dialogue state of every domain-slot pair in a single run, given a dialogue context (Chen et al., 2020; Le et al., 2019). This approach can be represented as follows:
fencoder(Xt, D1S1, · · · , DnSm). (2)
Because the encoder receives all of the domain-slot pairs, the model can factor in the relationship among the domain-slot pairs through the encoding process. For the encoder, these studies used models that are trained from scratch, without pre-training. However, since DST involves natural language text for the dialogue context, using a pre-trained language model can help improve the encoding process. Several studies used BERT (Devlin et al., 2019), a pre-trained bidirectional language model, for encoding the dialogue context (Zhang et al., 2019; Lee et al., 2019; Chao & Lane, 2019; Gao et al., 2019), but did not model the dependencies among different domain-slot pairs. Our approach fills the gap between these previous studies. In this work, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We modify the input structure of BERT, specifically the special token part of it, to adjust it for multi-domain DST.
The [CLS] token of BERT (Devlin et al., 2019) is expected to encode the aggregate sequence representation as it runs through BERT, which is used for various downstream tasks such as sentence classification or question answering. This [CLS] token can also be used as an aggregate representation for a given dialogue context. However, in a multi-domain dialogue, a single [CLS] token has to store information for different domain-slot pairs at the same time. In this respect, we propose to use multiple special tokens, one for each domain-slot pair. Using a separate special token for each domain-slot pair is more effective in storing information for different domains and slots since each token can concentrate on its corresponding domain and slot. We consider two different ways to represent such tokens: DS-merge and DS-split. DS-merge employs a single token to represent a single domain-slot pair. For example, to represent a domain-slot pair of {restaurant, area}, we use a special token DS(restaurant,area). DS-split, on the other hand, employs tokens separately for the domain and slot and then merges them into one to represent a domain-slot pair. For {restaurant, area}, the domain token Drestaurant and the slot token Sarea are computed separately and then merged. We use {DS}merge and {DS}split to represent the special tokens for DS-merge or DS-split, respectively. Unless it is absolutely necessary to specify whether the tokens are from DS-merge or DS-split, we refer to the DS-produced tokens as {DS} tokens, without special distinction, in the descriptions that follow. The {DS} tokens, after being encoded by the pre-trained language encoder along with the dialogue context, are used to predict their corresponding domain-slot values for a given dialogue context.
2 RELATED WORKS
Recent work on dialogue state tracking can be largely divided into two groups according to how the slot-values are predicted: fixed-vocabulary and open-vocabulary. The fixed-vocabulary approach, also known as the picklist-based approach, uses a classification module to predict the dialogue state for each slot from a pre-defined set of candidate values (Zhong et al., 2018; Nouri & Hosseini-Asl, 2018; Ramadan et al., 2018; Eric et al., 2019; Lee et al., 2019; Chen et al., 2020). The open-vocabulary approach generates the dialogue state for each domain-slot pair either by using a generative decoder to generate text (Wu et al., 2019; Hosseini-Asl et al., 2020) or by extracting text spans from the dialogue history (Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). There is also an approach that uses both picklist-based and span-based methods according to the slot type (Zhang et al., 2019).
For models that deal with multi-domain dialogue, how they deal with different domain-slot pairs is another way to divide them. The first approach encodes the dialogue context independent of the domain-slot pairs and uses separate modules for each domain-slot pair (Eric et al., 2019; Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). The second approach encodes the dialogue context using the domain-slot pair information as the prefix and runs the encoder multiple times (Nouri & Hosseini-Asl, 2018; Wu et al., 2019). Other approaches encode the dialogue context independently but merge it with the domain-slot pair information later with a separate fusion module (Zhong et al., 2018; Ramadan et al., 2018; Lee et al., 2019). However, none of these models are able to model the relationship among different domain-slot pairs because there is no module that enables interaction among them.
(Le et al., 2019) and (Chen et al., 2020) directly model the relationship among different domain-slot pairs. (Le et al., 2019) uses a Fertility decoder to learn potential dependencies across domain-slot pairs, but without using a pre-trained language model. Also, their model requires additional data such as system actions and delexicalized system responses for its performance. (Chen et al., 2020) also explicitly models the relationship among different domain-slot pairs by using a Graph Attention Network (GAT) (Veličković et al., 2018). Schema graphs, which are relation graphs between domains and slots, are utilized for connecting edges in the GAT. Our work is different from these works in that we leverage the power of a pre-trained language encoder for directly modeling the dependencies among different domain-slot pairs.
(Hosseini-Asl et al., 2020) takes a different approach from the others by using multi-task learning that encompasses DST as well as action and response generation with a generative language model GPT-2 (Radford et al., 2019). However, since our work is focused on DST, we consider the model that is trained on DST only. In the decoding process, dialogue states for different domain-slot pairs are sequentially generated.
3 PROPOSED METHOD
Our model is composed of three parts. The first is the domain-slot-context (DSC) encoder, which encodes the dialogue context along with the special tokens representing domain-slot pairs. Next is the slot-gate classifier, a preliminary classifier that predicts whether each domain-slot pair is relevant to the dialogue context. We adopted the concept of the slot-gate classifier from (Wu et al., 2019) and adjusted it for our model. The last is the slot-value classifier, which predicts the value for each domain-slot pair among the candidate values.
In the following descriptions, we assume a dialogue context with a total of T turns. The task is to predict the dialogue state, i.e., the {domain, slot, value} triplets for all domain-slot pairs, for every turn t = 1, · · · , T , using the dialogue context up to that turn. This section presents an overview of our proposed model.
3.1 DOMAIN-SLOT-CONTEXT ENCODER
The main structure of our model is the DSC encoder, which uses a pre-trained language encoder to encode the dialogue context along with {DS} tokens. For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) due to its strong performance on numerous natural language understanding tasks while having fewer parameters compared to other BERT-style encoders. {DS} tokens work like
the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). The set of special tokens for each layout are shown in Eq. (3) and Eq. (4), respectively. In DS-merge, we used special tokens for each individual domain-slot pair. If there are many domain-slot pairs, using this layout can increase the number of special tokens as each domain-slot pair requires a separate special token. In DS-split, we used separate tokens for the domain and slot. To represent a domain-slot pair, we merged the corresponding tokens from each domain and slot by concatenating them. This promotes modeling compositionality, since the same slot token can be used for different domains. These {DS} tokens and the dialogue context are processed through the DSC encoder, which results in each token in {DS} being encoded with contextualized representations according to its domain and slot.
{DS}merge = {DS(domain(1),slot(1)), · · · , DS(domain(n),slot(m))} (3) {DS}split = {Ddomain(1) , · · · , Ddomain(n) , Sslot(1) , · · · , Sslot(m)} (4)
Fig. 3 shows the input representation of the DSC encoder. The sequence begins with {DS} tokens. The special token [CLS] follows, which encodes the overall information of the dialogue context. For the dialogue context, we added a special token [SEPu] to separate each user or system utterance, which is added at the end of each utterance from the user or system. The input ends with a special token [SEP ] as the end-of-sequence token.
4 types of embeddings are summed up to represent each token embedding. We used the pre-trained word embedding of ALBERT, except for the {DS} tokens, which are randomly initialized. We introduced the token type embedding to differentiate the {DS} tokens, user utterances tokens, and system utterances tokens. For DS-merge, we used a single token type embedding to represent a domain-slot pair, whereas for DS-split, we used two token type embeddings, one for the domain and the other for the slot. We did not apply this embedding for the [CLS] token. Position embeddings are also employed from ALBERT, but the index of the positional embedding starts from the [CLS] token. We did not use the positional embedding for the {DS} tokens as the order within those tokens is meaningless. Lastly, the segment embedding from ALBERT was used to represent the whole sequence as a single segment, which is the default segment embedding of ALBERT.
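To make the input layout above concrete, the sketch below (our own illustration, not the authors' released code) builds the token sequence and token-type tags of Fig. 3 for the DS-merge layout. The `domain_slot_pairs` and `dialogue_turns` variables and the literal special-token strings are hypothetical placeholders.

```python
# Minimal sketch of the DSC encoder input construction (DS-merge layout).
def build_dsc_input(domain_slot_pairs, dialogue_turns):
    """Return (tokens, token_types): {DS} tokens, [CLS], then system/user
    utterances each closed by [SEPu], ending with a final [SEP]."""
    tokens, token_types = [], []
    for domain, slot in domain_slot_pairs:        # one special token per pair
        tokens.append(f"[DS_{domain}_{slot}]")
        token_types.append("ds")
    tokens.append("[CLS]")
    token_types.append("none")                    # no type embedding for [CLS]
    for speaker, utterance in dialogue_turns:     # dialogue context
        for word in utterance.split():
            tokens.append(word)
            token_types.append(speaker)           # "system" or "user"
        tokens.append("[SEPu]")                   # closes each utterance
        token_types.append(speaker)
    tokens.append("[SEP]")                        # end-of-sequence token
    token_types.append("none")
    return tokens, token_types

pairs = [("restaurant", "area"), ("hotel", "area")]
turns = [("user", "i need a cheap restaurant in the centre")]
print(build_dsc_input(pairs, turns)[0][:5])
```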
The DSC encoder encodes contextualized embeddings for every input token. However, for the slot-gate classifier and slot-value classifier, we only use the special token outputs of the DSC encoder (the [CLS] token and the {DS} tokens). This is formally defined as follows for DS-merge and DS-split, respectively, for turn t:
D̂S(1,1), · · · , D̂S(n,m), ĈLS = DSCencoder([{DS}merge, CLS,Xt, SEP ]), (5)
D̂1, · · · , D̂n, Ŝ1 · · · , Ŝm, ĈLS = DSCencoder([{DS}split, CLS,Xt, SEP ]), (6)
where Xt represents the dialogue context (S1, SEPu, U1, SEPu, · · · , St, SEPu, Ut, SEPu). Ut and St represent the user and system utterances for the t-th turn, respectively. The {DS} tokens and the [CLS] token with the hat notation (^) represent the encoded outputs of the DSC encoder for those special tokens. They are vectors in R^d, where d is the hidden dimension of ALBERT.
3.2 SLOT-GATE CLASSIFIER
For the slot-gate classifier, we use the DSC encoder output of the {DS} tokens for each domain-slot pair to predict whether it is relevant to the dialogue or not. Previous methods used the gate categories {prediction, dontcare, none}, where prediction means the slot value is neither dontcare nor none, dontcare means the predicted slot value is dontcare, and none means the domain-slot pair is not relevant. The labels for the slot-gates are derived from the slot-values. However, the performance for the dontcare category was far inferior to the other two categories, so we dropped the dontcare category and only used {prediction, none}. In our preliminary models with ALBERT-large-v2, the precision and recall for dontcare were 48.87% and 17.21%, respectively, whereas precision and recall were 98.91% and 99.45% for none and 96.16% and 94.93% for prediction. In this setting, the dontcare category is included in prediction. For DS-merge, the slot-gate classifier predicts the gate using the domain-slot pair special token. For the domain-slot pair of domain i and slot j, the slot-gate classifier output for DS-merge is
Gate_{D_i S_j} = \mathrm{sigmoid}\left(W_{G_{DS(i,j)}} \hat{DS}_{(i,j)}\right), \quad (7)
where W_{G_{DS(i,j)}} \in \mathbb{R}^{1 \times d}. For DS-split, the slot-gate classifier uses the concatenated output of the corresponding domain and slot tokens. Similarly, for the same domain-slot pair, the slot-gate classifier output for DS-split is
Gate_{D_i S_j} = \mathrm{sigmoid}\left(W_{G_{(D_i, S_j)}} \left[\hat{D}_i | \hat{S}_j\right]\right), \quad (8)
where | represents the concatenation of vectors and W_{G_{(D_i, S_j)}} \in \mathbb{R}^{1 \times 2d}. The loss objective for the gate classification is as follows:
L_{gate} = \sum_{(i,j) \in DS} \mathrm{BinaryCrossEntropy}\left(y^{gate}_{D_i S_j}, Gate_{D_i S_j}\right), \quad (9)
where DS refers to the set of all domain-slot pairs and y^{gate}_{D_i S_j} is the binary slot-gate label for domain i and slot j. If a domain-slot pair is predicted to be none, the corresponding output of the slot-value classifier is set to none regardless of the slot-value classifier's prediction.
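As a rough illustration of Eqs. (7) and (9), the DS-merge slot-gate head can be written as one sigmoid unit per domain-slot pair applied to that pair's encoded {DS} token. This is a sketch under our own assumptions, not the released implementation; `hidden_dim` and `num_pairs` below are placeholder values.

```python
import torch
import torch.nn as nn

class SlotGateClassifier(nn.Module):
    """One 1 x d gate projection per domain-slot pair, stored as a single matrix."""
    def __init__(self, hidden_dim: int, num_pairs: int):
        super().__init__()
        self.gate_proj = nn.Parameter(torch.randn(num_pairs, hidden_dim) * 0.02)

    def forward(self, ds_token_outputs: torch.Tensor) -> torch.Tensor:
        # ds_token_outputs: [batch, num_pairs, hidden_dim] encoded {DS} tokens.
        logits = (ds_token_outputs * self.gate_proj).sum(dim=-1)
        return torch.sigmoid(logits)  # [batch, num_pairs] relevance probabilities

gate_head = SlotGateClassifier(hidden_dim=8, num_pairs=30)
gate_probs = gate_head(torch.randn(2, 30, 8))
gate_labels = torch.randint(0, 2, (2, 30)).float()
gate_loss = nn.BCELoss()(gate_probs, gate_labels)  # binary cross-entropy of Eq. (9)
```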
3.3 SLOT-VALUE CLASSIFIER
We employ the fixed-vocabulary based classification method for predicting slot values. As in (Zhang et al., 2019), the candidate-value list for each domain-slot pair was constructed by using the values from the training dataset, rather than using the incomplete ontology from the dataset. The [CLS] token is concatenated with each token from {DS}, and used as the input to the slot-value classifier for each domain-slot pair. The slot-value classifier output of domain i and slot j for DS-merge is as follows:
Value_{D_i S_j} = \mathrm{softmax}\left(W_{V_{DS(i,j)}} \left[\hat{DS}_{(i,j)} | \hat{CLS}\right]\right), \quad (10)
where W_{V_{DS(i,j)}} \in \mathbb{R}^{n_{D_i S_j} \times 2d} and n_{D_i S_j} is the number of candidate values for domain i and slot j. Similarly, for DS-split, the slot-value classifier output is
Value_{D_i S_j} = \mathrm{softmax}\left(W_{V_{(D_i, S_j)}} \left[\hat{D}_i | \hat{S}_j | \hat{CLS}\right]\right), \quad (11)
where W_{V_{(D_i, S_j)}} \in \mathbb{R}^{n_{D_i S_j} \times 3d}. The loss objective for the slot-value classification is as follows:
L_{value} = \sum_{(i,j) \in DS} \mathrm{CrossEntropy}\left(y^{value}_{D_i S_j}, Value_{D_i S_j}\right), \quad (12)
where y^{value}_{D_i S_j} is the slot-value label for domain i and slot j.
3.4 TOTAL OBJECTIVE FUNCTION
The DSC encoder, slot-gate classifier, and slot-value classifier are jointly trained under the total objective function below.
Ltotal = Lgate + Lvalue (13)
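A minimal PyTorch sketch of the DS-merge slot-value head of Eqs. (10) and (12) and the joint objective of Eq. (13) is given below. It reflects our reading of the equations rather than the authors' code; the candidate-value counts and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SlotValueClassifier(nn.Module):
    """One linear head per domain-slot pair over the concatenation [DS_hat | CLS_hat]."""
    def __init__(self, hidden_dim: int, num_values_per_pair):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, n) for n in num_values_per_pair]
        )

    def forward(self, ds_outputs, cls_output):
        # ds_outputs: [batch, num_pairs, d]; cls_output: [batch, d]
        logits = []
        for j, head in enumerate(self.heads):
            pair_input = torch.cat([ds_outputs[:, j], cls_output], dim=-1)
            logits.append(head(pair_input))       # [batch, num_values_j]
        return logits

d, batch = 8, 2
value_head = SlotValueClassifier(d, num_values_per_pair=[5, 7])
value_logits = value_head(torch.randn(batch, 2, d), torch.randn(batch, d))
value_loss = sum(nn.CrossEntropyLoss()(lg, torch.zeros(batch, dtype=torch.long))
                 for lg in value_logits)          # Eq. (12)
gate_loss = torch.tensor(0.3)                     # placeholder for the slot-gate loss of Eq. (9)
total_loss = gate_loss + value_loss               # Eq. (13)
```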
4 EXPERIMENT SETUP AND RESULTS
We evaluate our model using the joint goal accuracy, which considers a model prediction to be correct when the prediction jointly matches the ground truth values for all domain-slot pairs, given a dialogue context.
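The metric can be summarized with a short sketch: a turn counts as correct only if the predicted value matches the gold value for every domain-slot pair. The example states below are hypothetical.

```python
# Minimal sketch of joint goal accuracy over per-turn dialogue states.
def joint_goal_accuracy(predicted_states, gold_states):
    correct = sum(pred == gold for pred, gold in zip(predicted_states, gold_states))
    return correct / len(gold_states)

pred = [{"restaurant-area": "centre", "hotel-area": "centre"},
        {"restaurant-area": "centre", "hotel-area": "north"}]
gold = [{"restaurant-area": "centre", "hotel-area": "centre"},
        {"restaurant-area": "centre", "hotel-area": "centre"}]
print(joint_goal_accuracy(pred, gold))  # 0.5: only the first turn matches on all pairs
```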
4.1 DATASET
We use the MultiWOZ-2.1 (Eric et al., 2019) and MultiWOZ-2.2 dataset (Zang et al., 2020), both of which fixed noisy annotations and dialogue utterances of the MultiWOZ 2.0 dataset (Budzianowski et al., 2018). The dataset contains 7 domains and over 10,000 dialogues. We follow the previous studies and use 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs. The other two domains (police, hospital) have little data and do not appear in the test dataset. For MultiWOZ-2.1, we follow the pre-processing explained in (Wu et al., 2019). For MultiWOZ-2.2, we use the raw data as given without any pre-processing.
4.2 SETUP
For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) from HuggingFace (Wolf et al., 2019) in PyTorch (Paszke et al., 2019). We used the xxlarge-v2 version of ALBERT for the main experiment and compare other versions (base-v2, large-v2) in the analysis section. We also compared RoBERTa (Liu et al., 2019) to test the generalizability of our model. The optimizer was AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e−5 for ALBERT-xlarge-v2, ALBERT-xxlarge-v2 and RoBERTa-large, and 5e−5 for ALBERT-base-v2, ALBERT-large-v2 and RoBERTa-base. We applied linear warm-up followed by linear decay for the learning rate. We trained all models with an effective batch size of 32, using gradient accumulation for bigger ALBERT models. Models were
selected based on their joint goal accuracy on the validation data split. Only the training data was used to build the labels for each domain-slot pair. We used two NVIDIA V100 GPUs for our training. The original ALBERT was pre-trained with a sequence length of up to 512 tokens. However, dialogues that are longer than 512 tokens exist in the data. Usually, the standard procedure for this situation is to truncate the sequence to 512 tokens and discard the remaining tokens. Instead, to cover the dialogues longer than 512 tokens that are in the dataset, we resized the positional embedding to cover the maximum dialogue length. We preserved the original pre-trained position embeddings for position indices up to 512 and randomly initialized the remaining position indices. This method showed better results than limiting the maximum sequence length to 512. We plan to release our code on GitHub.
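The position-embedding extension can be sketched as follows (assumed shapes and initialization, not the released code): the first 512 rows keep the pre-trained weights and the extra rows are freshly initialized.

```python
import torch
import torch.nn as nn

def resize_position_embeddings(pretrained: nn.Embedding, new_max_len: int) -> nn.Embedding:
    old_max_len, dim = pretrained.weight.shape
    resized = nn.Embedding(new_max_len, dim)
    resized.weight.data[:old_max_len] = pretrained.weight.data       # keep pre-trained rows
    resized.weight.data[old_max_len:].normal_(mean=0.0, std=0.02)    # new rows: random init
    return resized

old_pos = nn.Embedding(512, 128)   # stand-in for the pre-trained position table
new_pos = resize_position_embeddings(old_pos, new_max_len=1024)
print(new_pos.weight.shape)        # torch.Size([1024, 128])
```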
4.3 RESULTS
Table 1 shows the joint goal accuracy of our model compared to previous methods. Both of our models show better performance among models without any additional supervision other than the dialogue context and domain-slot pair labels. In particular, the DS-split, ALBERT-xxlarge-v2 version of our proposed model achieves state-of-the-art results on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets, without any form of extra supervision. With this encoder, the model with DS-split shows better results than the model with DS-merge. This shows that in models with enough capacity, the slot-sharing of DS-split was more effective. However, this was not the case for smaller ALBERT models, which is explained in Section 4.4.2. This is important in that scalability is much better for DS-split than DS-merge, as many slots can be shared across different domains, reducing the number of special tokens to be used. We show the individual domain-slot accuracy in Appendix A.2, Table 4.
4.4 ANALYSIS
In this section, we show that relationship modeling among different domain-slot pairs is indeed the key factor of our proposed model by running ablation studies. Also, we compare the effect of the size and type of the pre-trained language encoder in terms of performance.
4.4.1 RELATIONSHIP MODELING AMONG DIFFERENT DOMAIN-SLOT PAIRS
First, we did not use any {DS} tokens and only used the [CLS] token. Because there are no dedicated special tokens for each domain-slot pair, the performance is very poor, as shown in the 'None' row of Table 2. This shows that our approach of introducing {DS} tokens is effective. Next, to evaluate the effect of relationship modeling among different domain-slot pairs, we blocked the attention among different {DS} tokens during the encoding process, which restricts direct interaction among {DS} tokens. Table 2 shows that without the relationship modeling, our model performance deteriorates by a substantial amount. This validates our idea that relationship modeling is the crucial factor for our approach.
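One way to realize this ablation (our reading of the setup, not the released code) is an attention mask in which {DS} tokens may attend to the dialogue context and to themselves but not to the other {DS} tokens:

```python
import torch

def build_ablation_mask(num_ds: int, seq_len: int) -> torch.Tensor:
    # True = attention allowed. Start from full attention, then restrict the
    # DS-to-DS block so each {DS} token only attends to itself within it.
    mask = torch.ones(seq_len, seq_len, dtype=torch.bool)
    mask[:num_ds, :num_ds] = torch.eye(num_ds, dtype=torch.bool)
    return mask

mask = build_ablation_mask(num_ds=3, seq_len=6)
print(mask.int())  # rows/cols 0-2 are {DS} tokens; they still see the context tokens
```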
In Appendix A.1, we show some examples of wrong predictions that models without direct relationship modeling have made.
4.4.2 SIZE AND TYPE OF THE PRE-TRAINED LANGUAGE ENCODER
We compared ALBERT and RoBERTa (Liu et al., 2019) and various model sizes within those pre-trained language encoders. Table 3 shows the results for different versions of the pre-trained language encoders. For ALBERT, a bigger language model shows better results, as is the case for the various downstream tasks that ALBERT was evaluated on (Lan et al., 2019). Except for ALBERT-xxlarge, all other configurations show that DS-merge gives better performance than DS-split. Based on the drastic increase in performance with xxlarge, we presume that the high model complexity of ALBERT-xxlarge enabled {DS}split tokens to effectively encode information and made slot-sharing work. In smaller models, this slot-sharing might not have been as effective due to their smaller encoding capacity. Also, concatenation, which was used for merging domain and slot embeddings in DS-split, might not have been enough to fully represent the information for the domain-slot pair in smaller models. RoBERTa also shows similar results, with bigger models showing stronger performance.
4.4.3 LEARNING CURVE
Fig. 4 shows the learning curve of the ALBERT-xxlarge-v2 on the MultiWOZ-2.2 dataset. The joint goal accuracy steadily increases after the slot-value loss plateaus.
5 CONCLUSION
In this paper, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We introduced two methods to represent special tokens for each domain-slot pair: DS-merge and DS-split. These tokens work like the [CLS] token for BERT, encoding information corresponding to their domain-slot pair (DS-merge) or domain and slot (DS-split). These special tokens are run together with the dialogue context through the pre-trained language encoder, which enables modeling the relationship among different domain-slot pairs. Experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets. The ablation experiments show that the relationship modeling among different domain-slot pairs is the key element of our model. Also, we showed that larger pre-trained language encoders improve performance. We hope to advance our research by finding ways to effectively apply our model towards the open-vocabulary approach, which will enable better generalization for candidate values that are outside of the training data.
ACKNOWLEDGMENTS
This research was supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services / PR09-01-000-20]
A APPENDIX
A.1 RELATIONSHIP MODELING EXAMPLES
A.1.1 EXAMPLE 1
Fig. 5 shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {taxi, departure} is not explicitly mentioned in the dialogue context. Our full model correctly predicts the value for {taxi, departure}, which can be inferred from the dialogue context and {hotel, name}. However, the model without relationship modeling fails to predict the correct value for {taxi, departure}. User: i am staying in cambridge soon and would like to stay at a and b guest house.
System: sure, how many days and how many people?
User: we are staying 6 people for 4 nights starting from tuesday. i need the reference number
System: your booking is successful! your reference number is iigra0mi. do you need anything else?
User: yeas, what to recommend if i want to see good architecture in the west part of town?
System: unfortunately there is no good architecture on the west end but i can look in other parts of town if you want
User: what about a museum?
System: what part of town there are none in the west.
User: there are no museums in the west at all?
System: sorry about that, there are actually 7 in that area.
User: great, can i get the postcode, entrance fee and address of 1 of them?
System: cafe jello gallery has a free entrance fee. the address is cafe jello gallery, 13 magdalene street and the post code is cb30af. can i help you with anything else?
User: yes please. i need a taxi to commute.
System: when would you like to leave and arrive?
User: i would like to get to the gallery by 13:45, please.
System: sure, lookout for a blue volvo the contact number is 07941424083. can i help with anything else?
User: that is all for now. thank you so much
A.1.2 EXAMPLE 2
Fig. 6 also shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {train, day} is not explicitly mentioned in the dialogue context. As in the example above, it can be inferred from {restaurant, book day}. User: i would like to find a particular restaurant in cambridge. the name of the restaurant is restaurant 2 two. could you give me the location?
System: restaurant 2 two is nice french restaurant located at 22 chesterton road chesterton. would like me to book you a table?
User: that would be great. i need it for 8 on friday.
System: do you have a time preference?
User: yes at 11:15 if that is not available i can do 10:15
System: the booking for 10:15 was successful they will reserve the table for 15 minutes. the reference number is 6b5z7vj5.
User: thanks. can you help me find a train, too? i want to leave cambridge some time after 12:15.
A.2 INDIVIDUAL SLOT ACCURACY
Table 4 shows the individual domain-slot accuracy for the ALBERT-xxlarge-v2 model on the MultiWOZ-2.2 dataset. | 1. What is the purpose of segment embedding in the proposed approach?
2. Can you provide more information about the label cleaning method used in computing joint goal accuracy?
3. How much of the performance drop in DST is due to blocking attention across different DS tokens, and how do you evaluate the importance of the proposed approach?
4. Would it be possible to evaluate the robustness of the approach by testing it on other pre-trained encoders such as BERT and Roberta?
5. Why does Albert-xxlarge achieve significantly higher performance compared to other encoders when using the same tokenization and domain-slot relationship?
6. Is there any inconsistency in the results regarding the effectiveness of the domain-slot relationship depending on the choice of encoder?
7. How does the proposed architecture differ from the TRADE model, aside from the use of a more powerful pre-trained encoder? | Review | Review
Summary:
This paper proposed a new approach for modeling multi-domain dialogue state tracking by incorporating the domain-slot relationship using a pre-trained language encoder. The proposed approach is based on using special tokens to model such relationships. Two kinds of special tokens are proposed to represent domain-slot pairs: a DS_merge token for each specific pair, and separate tokens for every domain and slot.
Comments:
1- What is the role of the segment embedding (Fig. 3) for the DST task? Are two different segments used in the pre-training of the model?
2- Table 1: 2-1: What type of label cleaning is used to compute joint goal accuracy? According to the SimpleTOD paper, each baseline has used different label cleaning. 2-2: DS-split is bolded, while it is lower than SimpleTOD.
3- Table 2: the results indicate that DST performance drops in total by blocking attention across different DS tokens. However, it is not clear how much of the performance drop belongs to turns with cross-domain related slots. Figure 4 only presents one example of this case, which might not be representative of all wrong predictions. Also, it would be helpful to report DST for single-domain turns to evaluate the importance of the proposed approach.
4- Since the proposed approach can be used on any pre-trained encoder, an evaluation on BERT and/or RoBERTa would be helpful to understand the robustness of the approach to the choice of pre-trained encoder.
Post rebuttal:
Table 3: the results indicate that only ALBERT-xxlarge (222M) achieves very high performance. However, models comparable to other approaches, such as RoBERTa-base or ALBERT-xlarge, achieved around ~57% performance, which is within the margin of prior art. For example, SimpleTOD with GPT2-base (124M) achieved 55.7% and ConvBERT achieved 58%. Therefore, it is unclear why ALBERT-xxlarge gets so much higher performance compared to other encoders, since the same tokenization and domain-slot relationship modeling is used. Based on the results in Table 3, there is an inconsistency in that domain-slot relationship modeling does not always result in better performance, and it depends on the choice of encoder too.
Overall, the proposed architecture is very similar to the TRADE model in terms of using an encoder for the dialogue history, a slot-gate classifier and a slot-value classifier. The only difference is in using a much more powerful pre-trained encoder. |
ICLR | Title
Domain-slot Relationship Modeling using a Pre-trained Language Encoder for Multi-Domain Dialogue State Tracking
Abstract
Dialogue state tracking for multi-domain dialogues is challenging because the model should be able to track dialogue states across multiple domains and slots. Past studies had their limitations in that they did not factor in the relationship among different domain-slot pairs. Although recent approaches did support relationship modeling among the domain-slot pairs, they did not leverage a pre-trained language model, which has improved the performance of numerous natural language tasks, in the encoding process. Our approach fills the gap between these previous studies. We propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. Inspired by the way the special [CLS] token in BERT is used to aggregate the information of the whole sequence, we use multiple special tokens, one for each domain-slot pair, that encode information corresponding to their domain and slot. The special tokens are run together with the dialogue context through the pre-trained language encoder, which effectively models the relationship among different domain-slot pairs. Our experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets.
1 INTRODUCTION
A task-oriented dialogue system is designed to help humans solve tasks by understanding their needs and providing relevant information accordingly. For example, such a system may assist its user with making a reservation at an appropriate restaurant by understanding the user’s needs for having a nice dinner. It can also recommend an attraction site to a travelling user, accommodating the user’s specific preferences. Dialogue State Tracking (DST) is a core component of these taskoriented dialogue systems, which aims to identify the state of the dialogue between the user and the system. DST represents the dialogue state with triplets of the following items: a domain, a slot, a value. A set of {restaurant, price range, cheap}, or of {train, arrive-by, 7:00 pm} are examples of such triplets. Fig. 1 illustrates an example case of the dialogue state during the course of the conversation between the user and the system. Since a dialogue continues for multiple turns of utterances, the DST model should successfully predict the dialogue state at each turn as the conversation proceeds. For multi-domain conversations, the DST model should be able to track dialogue states across different domains and slots.
Past research on multi-domain conversations used a placeholder in the model to represent domain-slot pairs. A domain-slot pair is inserted into the placeholder in each run, and the model runs repeatedly until it covers all types of domain-slot pairs (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019). A DST model generally uses an encoder to extract information from the dialogue context that is relevant to the dialogue state. A typical input for a multi-domain DST model comprises a sequence of the user's and the system's utterances up to the turn t, Xt, and the domain-slot information for domain i and slot j, DiSj. In each run, the model feeds the input for a given domain-slot pair through the encoder.
fencoder(Xt, DiSj) for i = 1, · · · , n, j = 1, · · · ,m, (1)
where n and m are the number of domains and slots, respectively. However, because each domain-slot pair is modeled independently, the relationship among the domain-slot pairs cannot be learned. For example, if the user first asked for a hotel in a certain place and later asked for a restaurant near that hotel, sharing the information between {hotel, area} and {restaurant, area} would help the model recognize that the restaurant should be in the same area as the hotel.
Recent approaches address these issues by modeling the dialogue state of every domain-slot pair in a single run, given a dialogue context (Chen et al., 2020; Le et al., 2019). This approach can be represented as follows:
fencoder(Xt, D1S1, · · · , DnSm). (2)
Because the encoder receives all of the domain-slot pairs, the model can factor in the relationship among the domain-slot pairs through the encoding process. For the encoder, these studies used models that are trained from scratch, without pre-training. However, since DST involves natural language text for the dialogue context, using a pre-trained language model can help improve the encoding process. Several studies used BERT (Devlin et al., 2019), a pre-trained bidirectional language model, for encoding the dialogue context (Zhang et al., 2019; Lee et al., 2019; Chao & Lane, 2019; Gao et al., 2019), but did not model the dependencies among different domain-slot pairs. Our approach fills the gap between these previous studies. In this work, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We modify the input structure of BERT, specifically the special token part of it, to adjust it for multi-domain DST.
The [CLS] token of BERT (Devlin et al., 2019) is expected to encode the aggregate sequence representation as it runs through BERT, which is used for various downstream tasks such as sentence classification or question answering. This [CLS] token can also be used as an aggregate representation for a given dialogue context. However, in a multi-domain dialogue, a single [CLS] token has to store information for different domain-slot pairs at the same time. In this respect, we propose to use multiple special tokens, one for each domain-slot pair. Using a separate special token for each domain-slot pair is more effective in storing information for different domains and slots since each token can concentrate on its corresponding domain and slot. We consider two different ways to represent such tokens: DS-merge and DS-split. DS-merge employs a single token to represent a single domain-slot pair. For example, to represent a domain-slot pair of {restaurant, area}, we use a special token DS(restaurant,area). DS-split, on the other hand, employs tokens separately for the domain and slot and then merges them into one to represent a domain-slot pair. For {restaurant, area}, the domain token Drestaurant and the slot token Sarea are computed separately and then merged. We use {DS}merge and {DS}split to represent the special tokens for DS-merge or DS-split, respectively. Unless it is absolutely necessary to specify whether the tokens are from DS-merge or DS-split, we refer to the DS-produced tokens as {DS} tokens, without special distinction, in the descriptions that follow. The {DS} tokens, after being encoded by the pre-trained language encoder along with the dialogue context, are used to predict their corresponding domain-slot values for a given dialogue context.
2 RELATED WORKS
Recent work on dialogue state tracking can be largely divided into two groups according to how the slot-values are predicted: fixed-vocabulary and open-vocabulary. The fixed-vocabulary approach, also known as the picklist-based approach, uses a classification module to predict the dialogue state for each slot from a pre-defined set of candidate values (Zhong et al., 2018; Nouri & Hosseini-Asl, 2018; Ramadan et al., 2018; Eric et al., 2019; Lee et al., 2019; Chen et al., 2020). The open-vocabulary approach generates the dialogue state for each domain-slot pair either by using a generative decoder to generate text (Wu et al., 2019; Hosseini-Asl et al., 2020) or by extracting text spans from the dialogue history (Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). There is also an approach that uses both picklist-based and span-based methods according to the slot type (Zhang et al., 2019).
For models that deal with multi-domain dialogue, how they deal with different domain-slot pairs is another way to divide them. The first approach encodes the dialogue context independent of the domain-slot pairs and uses separate modules for each domain-slot pair (Eric et al., 2019; Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). The second approach encodes the dialogue context using the domain-slot pair information as the prefix and runs the encoder multiple times (Nouri & Hosseini-Asl, 2018; Wu et al., 2019). Other approaches encode the dialogue context independently but merge it with the domain-slot pair information later with a separate fusion module (Zhong et al., 2018; Ramadan et al., 2018; Lee et al., 2019). However, none of these models are able to model the relationship among different domain-slot pairs because there is no module that enables interaction among them.
(Le et al., 2019) and (Chen et al., 2020) directly model the relationship among different domain-slot pairs. (Le et al., 2019) uses a Fertility decoder to learn potential dependencies across domain-slot pairs, but without using a pre-trained language model. Also, their model requires additional data such as system actions and delexicalized system responses for its performance. (Chen et al., 2020) also explicitly models the relationship among different domain-slot pairs by using a Graph Attention Network (GAT) (Veličković et al., 2018). Schema graphs, which are relation graphs between domains and slots, are utilized for connecting edges in the GAT. Our work is different from these works in that we leverage the power of a pre-trained language encoder for directly modeling the dependencies among different domain-slot pairs.
(Hosseini-Asl et al., 2020) takes a different approach from the others by using multi-task learning that encompasses DST as well as action and response generation with a generative language model GPT-2 (Radford et al., 2019). However, since our work is focused on DST, we consider the model that is trained on DST only. In the decoding process, dialogue states for different domain-slot pairs are sequentially generated.
3 PROPOSED METHOD
Our model is composed of three parts. The first is the domain-slot-context (DSC) encoder, which encodes the dialogue context along with the special tokens representing domain-slot pairs. Next is the slot-gate classifier, a preliminary classifier that predicts whether each domain-slot pair is relevant to the dialogue context. We adopted the concept of the slot-gate classifier from (Wu et al., 2019) and adjusted it for our model. The last is the slot-value classifier, which predicts the value for each domain-slot pair among the candidate values.
In the following descriptions, we assume a dialogue context with a total of T turns. The task is to predict the dialogue state, i.e., the {domain, slot, value} triplets for all domain-slot pairs, for every turn t = 1, · · · , T , using the dialogue context up to that turn. This section presents an overview of our proposed model.
3.1 DOMAIN-SLOT-CONTEXT ENCODER
The main structure of our model is the DSC encoder, which uses a pre-trained language encoder to encode the dialogue context along with {DS} tokens. For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) due to its strong performance on numerous natural language understanding tasks while having fewer parameters compared to other BERT-style encoders. {DS} tokens work like
the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). The set of special tokens for each layout are shown in Eq. (3) and Eq. (4), respectively. In DS-merge, we used special tokens for each individual domain-slot pair. If there are many domain-slot pairs, using this layout can increase the number of special tokens as each domain-slot pair requires a separate special token. In DS-split, we used separate tokens for the domain and slot. To represent a domain-slot pair, we merged the corresponding tokens from each domain and slot by concatenating them. This promotes modeling compositionality, since the same slot token can be used for different domains. These {DS} tokens and the dialogue context are processed through the DSC encoder, which results in each token in {DS} being encoded with contextualized representations according to its domain and slot.
{DS}merge = {DS(domain(1),slot(1)), · · · , DS(domain(n),slot(m))} (3) {DS}split = {Ddomain(1) , · · · , Ddomain(n) , Sslot(1) , · · · , Sslot(m)} (4)
Fig. 3 shows the input representation of the DSC encoder. The sequence begins with {DS} tokens. The special token [CLS] follows, which encodes the overall information of the dialogue context. For the dialogue context, we added a special token [SEPu] to separate each user or system utterance, which is added at the end of each utterance from the user or system. The input ends with a special token [SEP ] as the end-of-sequence token.
4 types of embeddings are summed up to represent each token embedding. We used the pre-trained word embedding of ALBERT, except for the {DS} tokens, which are randomly initialized. We introduced the token type embedding to differentiate the {DS} tokens, user utterances tokens, and system utterances tokens. For DS-merge, we used a single token type embedding to represent a domain-slot pair, whereas for DS-split, we used two token type embeddings, one for the domain and the other for the slot. We did not apply this embedding for the [CLS] token. Position embeddings are also employed from ALBERT, but the index of the positional embedding starts from the [CLS] token. We did not use the positional embedding for the {DS} tokens as the order within those tokens is meaningless. Lastly, the segment embedding from ALBERT was used to represent the whole sequence as a single segment, which is the default segment embedding of ALBERT.
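To make the input layout above concrete, the sketch below (our own illustration, not the authors' released code) builds the token sequence and token-type tags of Fig. 3 for the DS-merge layout. The `domain_slot_pairs` and `dialogue_turns` variables and the literal special-token strings are hypothetical placeholders.

```python
# Minimal sketch of the DSC encoder input construction (DS-merge layout).
def build_dsc_input(domain_slot_pairs, dialogue_turns):
    """Return (tokens, token_types): {DS} tokens, [CLS], then system/user
    utterances each closed by [SEPu], ending with a final [SEP]."""
    tokens, token_types = [], []
    for domain, slot in domain_slot_pairs:        # one special token per pair
        tokens.append(f"[DS_{domain}_{slot}]")
        token_types.append("ds")
    tokens.append("[CLS]")
    token_types.append("none")                    # no type embedding for [CLS]
    for speaker, utterance in dialogue_turns:     # dialogue context
        for word in utterance.split():
            tokens.append(word)
            token_types.append(speaker)           # "system" or "user"
        tokens.append("[SEPu]")                   # closes each utterance
        token_types.append(speaker)
    tokens.append("[SEP]")                        # end-of-sequence token
    token_types.append("none")
    return tokens, token_types

pairs = [("restaurant", "area"), ("hotel", "area")]
turns = [("user", "i need a cheap restaurant in the centre")]
print(build_dsc_input(pairs, turns)[0][:5])
```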
The DSC encoder encodes contextualized embeddings for every input token. However, for the slot-gate classifier and slot-value classifier, we only use the special token outputs of the DSC encoder (the [CLS] token and the {DS} tokens). This is formally defined as follows for DS-merge and DS-split, respectively, for turn t:
D̂S(1,1), · · · , D̂S(n,m), ĈLS = DSCencoder([{DS}merge, CLS,Xt, SEP ]), (5)
D̂1, · · · , D̂n, Ŝ1 · · · , Ŝm, ĈLS = DSCencoder([{DS}split, CLS,Xt, SEP ]), (6)
where Xt represents the dialogue context (S1, SEPu, U1, SEPu, · · · , St, SEPu, Ut, SEPu). Ut and St represent the user and system utterances for the t-th turn, respectively. The {DS} tokens and the [CLS] token with the hat notation (^) represent the encoded outputs of the DSC encoder for those special tokens. They are vectors in R^d, where d is the hidden dimension of ALBERT.
3.2 SLOT-GATE CLASSIFIER
For the slot-gate classifier, we use the DSC encoder output of the {DS} tokens for each domain-slot pair to predict whether it is relevant to the dialogue or not. Previous methods used the gate categories {prediction, dontcare, none}, where prediction means the slot value is neither dontcare nor none, dontcare means the predicted slot value is dontcare, and none means the domain-slot pair is not relevant. The labels for the slot-gates are derived from the slot-values. However, the performance for the dontcare category was far inferior to the other two categories, so we dropped the dontcare category and only used {prediction, none}. In our preliminary models with ALBERT-large-v2, the precision and recall for dontcare were 48.87% and 17.21%, respectively, whereas precision and recall were 98.91% and 99.45% for none and 96.16% and 94.93% for prediction. In this setting, the dontcare category is included in prediction. For DS-merge, the slot-gate classifier predicts the gate using the domain-slot pair special token. For the domain-slot pair of domain i and slot j, the slot-gate classifier output for DS-merge is
Gate_{D_i S_j} = \mathrm{sigmoid}\left(W_{G_{DS(i,j)}} \hat{DS}_{(i,j)}\right), \quad (7)
where W_{G_{DS(i,j)}} \in \mathbb{R}^{1 \times d}. For DS-split, the slot-gate classifier uses the concatenated output of the corresponding domain and slot tokens. Similarly, for the same domain-slot pair, the slot-gate classifier output for DS-split is
Gate_{D_i S_j} = \mathrm{sigmoid}\left(W_{G_{(D_i, S_j)}} \left[\hat{D}_i | \hat{S}_j\right]\right), \quad (8)
where | represents the concatenation of vectors and W_{G_{(D_i, S_j)}} \in \mathbb{R}^{1 \times 2d}. The loss objective for the gate classification is as follows:
L_{gate} = \sum_{(i,j) \in DS} \mathrm{BinaryCrossEntropy}\left(y^{gate}_{D_i S_j}, Gate_{D_i S_j}\right), \quad (9)
where DS refers to the set of all domain-slot pairs and y^{gate}_{D_i S_j} is the binary slot-gate label for domain i and slot j. If a domain-slot pair is predicted to be none, the corresponding output of the slot-value classifier is set to none regardless of the slot-value classifier's prediction.
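As a rough illustration of Eqs. (7) and (9), the DS-merge slot-gate head can be written as one sigmoid unit per domain-slot pair applied to that pair's encoded {DS} token. This is a sketch under our own assumptions, not the released implementation; `hidden_dim` and `num_pairs` below are placeholder values.

```python
import torch
import torch.nn as nn

class SlotGateClassifier(nn.Module):
    """One 1 x d gate projection per domain-slot pair, stored as a single matrix."""
    def __init__(self, hidden_dim: int, num_pairs: int):
        super().__init__()
        self.gate_proj = nn.Parameter(torch.randn(num_pairs, hidden_dim) * 0.02)

    def forward(self, ds_token_outputs: torch.Tensor) -> torch.Tensor:
        # ds_token_outputs: [batch, num_pairs, hidden_dim] encoded {DS} tokens.
        logits = (ds_token_outputs * self.gate_proj).sum(dim=-1)
        return torch.sigmoid(logits)  # [batch, num_pairs] relevance probabilities

gate_head = SlotGateClassifier(hidden_dim=8, num_pairs=30)
gate_probs = gate_head(torch.randn(2, 30, 8))
gate_labels = torch.randint(0, 2, (2, 30)).float()
gate_loss = nn.BCELoss()(gate_probs, gate_labels)  # binary cross-entropy of Eq. (9)
```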
3.3 SLOT-VALUE CLASSIFIER
We employ the fixed-vocabulary based classification method for predicting slot values. As in (Zhang et al., 2019), the candidate-value list for each domain-slot pair was constructed by using the values from the training dataset, rather than using the incomplete ontology from the dataset. The [CLS] token is concatenated with each token from {DS}, and used as the input to the slot-value classifier for each domain-slot pair. The slot-value classifier output of domain i and slot j for DS-merge is as follows:
Value_{D_i S_j} = \mathrm{softmax}\left(W_{V_{DS(i,j)}} \left[\hat{DS}_{(i,j)} | \hat{CLS}\right]\right), \quad (10)
where W_{V_{DS(i,j)}} \in \mathbb{R}^{n_{D_i S_j} \times 2d} and n_{D_i S_j} is the number of candidate values for domain i and slot j. Similarly, for DS-split, the slot-value classifier output is
Value_{D_i S_j} = \mathrm{softmax}\left(W_{V_{(D_i, S_j)}} \left[\hat{D}_i | \hat{S}_j | \hat{CLS}\right]\right), \quad (11)
where W_{V_{(D_i, S_j)}} \in \mathbb{R}^{n_{D_i S_j} \times 3d}. The loss objective for the slot-value classification is as follows:
L_{value} = \sum_{(i,j) \in DS} \mathrm{CrossEntropy}\left(y^{value}_{D_i S_j}, Value_{D_i S_j}\right), \quad (12)
where y^{value}_{D_i S_j} is the slot-value label for domain i and slot j.
3.4 TOTAL OBJECTIVE FUNCTION
The DSC encoder, slot-gate classifier, and slot-value classifier are jointly trained under the total objective function below.
Ltotal = Lgate + Lvalue (13)
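A minimal PyTorch sketch of the DS-merge slot-value head of Eqs. (10) and (12) and the joint objective of Eq. (13) is given below. It reflects our reading of the equations rather than the authors' code; the candidate-value counts and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SlotValueClassifier(nn.Module):
    """One linear head per domain-slot pair over the concatenation [DS_hat | CLS_hat]."""
    def __init__(self, hidden_dim: int, num_values_per_pair):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, n) for n in num_values_per_pair]
        )

    def forward(self, ds_outputs, cls_output):
        # ds_outputs: [batch, num_pairs, d]; cls_output: [batch, d]
        logits = []
        for j, head in enumerate(self.heads):
            pair_input = torch.cat([ds_outputs[:, j], cls_output], dim=-1)
            logits.append(head(pair_input))       # [batch, num_values_j]
        return logits

d, batch = 8, 2
value_head = SlotValueClassifier(d, num_values_per_pair=[5, 7])
value_logits = value_head(torch.randn(batch, 2, d), torch.randn(batch, d))
value_loss = sum(nn.CrossEntropyLoss()(lg, torch.zeros(batch, dtype=torch.long))
                 for lg in value_logits)          # Eq. (12)
gate_loss = torch.tensor(0.3)                     # placeholder for the slot-gate loss of Eq. (9)
total_loss = gate_loss + value_loss               # Eq. (13)
```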
4 EXPERIMENT SETUP AND RESULTS
We evaluate our model using the joint goal accuracy, which considers a model prediction to be correct when the prediction jointly matches the ground truth values for all domain-slot pairs, given a dialogue context.
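The metric can be summarized with a short sketch: a turn counts as correct only if the predicted value matches the gold value for every domain-slot pair. The example states below are hypothetical.

```python
# Minimal sketch of joint goal accuracy over per-turn dialogue states.
def joint_goal_accuracy(predicted_states, gold_states):
    correct = sum(pred == gold for pred, gold in zip(predicted_states, gold_states))
    return correct / len(gold_states)

pred = [{"restaurant-area": "centre", "hotel-area": "centre"},
        {"restaurant-area": "centre", "hotel-area": "north"}]
gold = [{"restaurant-area": "centre", "hotel-area": "centre"},
        {"restaurant-area": "centre", "hotel-area": "centre"}]
print(joint_goal_accuracy(pred, gold))  # 0.5: only the first turn matches on all pairs
```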
4.1 DATASET
We use the MultiWOZ-2.1 (Eric et al., 2019) and MultiWOZ-2.2 dataset (Zang et al., 2020), both of which fixed noisy annotations and dialogue utterances of the MultiWOZ 2.0 dataset (Budzianowski et al., 2018). The dataset contains 7 domains and over 10,000 dialogues. We follow the previous studies and use 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs. The other two domains (police, hospital) have little data and do not appear in the test dataset. For MultiWOZ-2.1, we follow the pre-processing explained in (Wu et al., 2019). For MultiWOZ-2.2, we use the raw data as given without any pre-processing.
4.2 SETUP
For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) from HuggingFace (Wolf et al., 2019) in PyTorch (Paszke et al., 2019). We used the xxlarge-v2 version of ALBERT for the main experiment and compare other versions (base-v2, large-v2) in the analysis section. We also compared RoBERTa (Liu et al., 2019) to test the generalizability of our model. The optimizer was AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e−5 for ALBERT-xlarge-v2, ALBERT-xxlarge-v2 and RoBERTa-large, and 5e−5 for ALBERT-base-v2, ALBERT-large-v2 and RoBERTa-base. We applied linear warm-up followed by linear decay for the learning rate. We trained all models with an effective batch size of 32, using gradient accumulation for bigger ALBERT models. Models were
selected based on their joint goal accuracy on the validation data split. Only the training data was used to build the labels for each domain-slot pair. We used two NVIDIA V100 GPUs for our training. The original ALBERT was pre-trained with a sequence length of up to 512 tokens. However, dialogues that are longer than 512 tokens exist in the data. Usually, the standard procedure for this situation is to truncate the sequence to 512 tokens and discard the remaining tokens. Instead, to cover the dialogues longer than 512 tokens that are in the dataset, we resized the positional embedding to cover the maximum dialogue length. We preserved the original pre-trained position embeddings for position indices up to 512 and randomly initialized the remaining position indices. This method showed better results than limiting the maximum sequence length to 512. We plan to release our code on GitHub.
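The position-embedding extension can be sketched as follows (assumed shapes and initialization, not the released code): the first 512 rows keep the pre-trained weights and the extra rows are freshly initialized.

```python
import torch
import torch.nn as nn

def resize_position_embeddings(pretrained: nn.Embedding, new_max_len: int) -> nn.Embedding:
    old_max_len, dim = pretrained.weight.shape
    resized = nn.Embedding(new_max_len, dim)
    resized.weight.data[:old_max_len] = pretrained.weight.data       # keep pre-trained rows
    resized.weight.data[old_max_len:].normal_(mean=0.0, std=0.02)    # new rows: random init
    return resized

old_pos = nn.Embedding(512, 128)   # stand-in for the pre-trained position table
new_pos = resize_position_embeddings(old_pos, new_max_len=1024)
print(new_pos.weight.shape)        # torch.Size([1024, 128])
```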
4.3 RESULTS
Table 1 shows the joint goal accuracy of our model compared to previous methods. Both of our models show better performance among models without any additional supervision other than the dialogue context and domain-slot pair labels. In particular, the DS-split, ALBERT-xxlarge-v2 version of our proposed model achieves state-of-the-art results on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets, without any form of extra supervision. With this encoder, the model with DS-split shows better results than the model with DS-merge. This shows that in models with enough capacity, the slot-sharing of DS-split was more effective. However, this was not the case for smaller ALBERT models, which is explained in Section 4.4.2. This is important in that scalability is much better for DS-split than DS-merge, as many slots can be shared across different domains, reducing the number of special tokens to be used. We show the individual domain-slot accuracy in Appendix A.2, Table 4.
4.4 ANALYSIS
In this section, we show that relationship modeling among different domain-slot pairs is indeed the key factor of our proposed model by running ablation studies. Also, we compare the effect of the size and type of the pre-trained language encoder in terms of performance.
4.4.1 RELATIONSHIP MODELING AMONG DIFFERENT DOMAIN-SLOT PAIRS
First, we did not use any {DS} tokens and only used the [CLS] token. Because there are no dedicated special tokens for each domain-slot pair, the performance is very poor, as shown in the 'None' row of Table 2. This shows that our approach of introducing {DS} tokens is effective. Next, to evaluate the effect of relationship modeling among different domain-slot pairs, we blocked the attention among different {DS} tokens during the encoding process, which restricts direct interaction among {DS} tokens. Table 2 shows that without the relationship modeling, our model performance deteriorates by a substantial amount. This validates our idea that relationship modeling is the crucial factor for our approach.
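One way to realize this ablation (our reading of the setup, not the released code) is an attention mask in which {DS} tokens may attend to the dialogue context and to themselves but not to the other {DS} tokens:

```python
import torch

def build_ablation_mask(num_ds: int, seq_len: int) -> torch.Tensor:
    # True = attention allowed. Start from full attention, then restrict the
    # DS-to-DS block so each {DS} token only attends to itself within it.
    mask = torch.ones(seq_len, seq_len, dtype=torch.bool)
    mask[:num_ds, :num_ds] = torch.eye(num_ds, dtype=torch.bool)
    return mask

mask = build_ablation_mask(num_ds=3, seq_len=6)
print(mask.int())  # rows/cols 0-2 are {DS} tokens; they still see the context tokens
```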
In Appendix A.1, we show some examples of wrong predictions that models without direct relationship modeling have made.
4.4.2 SIZE AND TYPE OF THE PRE-TRAINED LANGUAGE ENCODER
We compared ALBERT and RoBERTa (Liu et al., 2019) and various model sizes within those pre-trained language encoders. Table 3 shows the results for different versions of the pre-trained language encoders. For ALBERT, a bigger language model shows better results, as is the case for the various downstream tasks that ALBERT was evaluated on (Lan et al., 2019). Except for ALBERT-xxlarge, all other configurations show that DS-merge gives better performance than DS-split. Based on the drastic increase in performance with xxlarge, we presume that the high model complexity of ALBERT-xxlarge enabled {DS}split tokens to effectively encode information and made slot-sharing work. In smaller models, this slot-sharing might not have been as effective due to their smaller encoding capacity. Also, concatenation, which was used for merging domain and slot embeddings in DS-split, might not have been enough to fully represent the information for the domain-slot pair in smaller models. RoBERTa also shows similar results, with bigger models showing stronger performance.
4.4.3 LEARNING CURVE
Fig. 4 shows the learning curve of the ALBERT-xxlarge-v2 on the MultiWOZ-2.2 dataset. The joint goal accuracy steadily increases after the slot-value loss plateaus.
5 CONCLUSION
In this paper, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We introduced two methods to represent special tokens for each domain-slot pair: DS-merge and DS-split. These tokens work like the [CLS] token for BERT, encoding information corresponding to their domain-slot pair (DS-merge) or domain and slot (DS-split). These special tokens are run together with the dialogue context through the pre-trained language encoder, which enables modeling the relationship among different domain-slot pairs. Experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets. The ablation experiments show that the relationship modeling among different domain-slot pairs is the key element of our model. Also, we showed that larger pre-trained language encoders improve performance. We hope to advance our research by finding ways to effectively apply our model towards the open-vocabulary approach, which will enable better generalization for candidate values that are outside of the training data.
ACKNOWLEDGMENTS
This research was supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services / PR09-01-000-20]
A APPENDIX
A.1 RELATIONSHIP MODELING EXAMPLES
A.1.1 EXAMPLE 1
Fig. 5 shows an example of a wrong prediction that the model without domain-slot relationship modeling makes. The value for {taxi, departure} is not explicitly mentioned in the dialogue context. Our full model correctly predicts the value for {taxi, departure}, which can be inferred from the dialogue context and {hotel, name}. However, the model without relationship modeling fails to predict the correct value for {taxi, departure}. User: i am staying in cambridge soon and would like to stay at a and b guest house.
System: sure, how many days and how many people?
User: we are staying 6 people for 4 nights starting from tuesday. i need the reference number
System: your booking is successful! your reference number is iigra0mi. do you need anything else?
User: yeas, what to recommend if i want to see good architecture in the west part of town?
System: unfortunately there is no good architecture on the west end but i can look in other parts of town if you want
User: what about a museum?
System: what part of town there are none in the west.
User: there are no museums in the west at all?
System: sorry about that, there are actually 7 in that area.
User: great, can i get the postcode, entrance fee and address of 1 of them?
System: cafe jello gallery has a free entrance fee. the address is cafe jello gallery, 13 magdalene street and the post code is cb30af. can i help you with anything else?
User: yes please. i need a taxi to commute.
System: when would you like to leave and arrive?
User: i would like to get to the gallery by 13:45, please.
System: sure, lookout for a blue volvo the contact number is 07941424083. can i help with anything else?
User: that is all for now. thank you so much
A.1.2 EXAMPLE 2
Fig. 6 also shows an example of a wrong prediction made by the model without domain-slot relationship modeling. The value for {train, day} is not explicitly mentioned in the dialogue context. In a similar manner to the example above, it can be inferred from {restaurant, book day}.
User: i would like to find a particular restaurant in cambridge. the name of the restaurant is restaurant 2 two. could you give me the location?
System: restaurant 2 two is nice french restaurant located at 22 chesterton road chesterton. would like me to book you a table?
User: that would be great. i need it for 8 on friday.
System: do you have a time preference?
User: yes at 11:15 if that is not available i can do 10:15
System: the booking for 10:15 was successful they will reserve the table for 15 minutes. the reference number is 6b5z7vj5.
User: thanks. can you help me find a train, too? i want to leave cambridge some time after 12:15.
A.2 INDIVIDUAL SLOT ACCURACY
Table 4 shows the individual domain-slot accuracy for the ALBERT-xxlarge-v2 model on the MultiWOZ-2.2 dataset.
1. What is the focus of the paper regarding Dialogue State Tracking?
2. What are the strengths of the proposed approach, particularly in terms of encoding and pre-trained representations?
3. What are the weaknesses of the paper, especially regarding scalability and design decisions?
4. Do you have any concerns about the approach, such as the lack of qualitative analysis and limited training time/accuracy tradeoff discussion?
5. Are there any suggestions for additional analyses or improvements that could enhance the paper's contributions?
Review
Summary: This paper showcases how pre-training can help with Dialogue State Tracking. The authors explicitly model the relationship between domain-slot pairs. With their encoding and using strong pre-trained initializations they are able to improve the joint goal accuracy by almost 1.5 points which is impressive.
Reasons for score: This is a very well written paper and will be a good resource for people working on the task of Dialogue State Tracking. The authors show how they can model relationships between domain-slot pairs and how they can encode them effectively using pre-trained representations. I am hoping that the authors can address some of the cons during the rebuttal period.
Pros:
Good dialogue representation which helps with the task of state tracking
Simple model consisting of encoders and 2 classifiers which are well explained.
Clear ablation study showing the value of 1) pre-training and 2) modeling relationship between domain-slot values
Cons:
This approach, like other popular approaches, suffers from the problem of having a fixed output vocabulary for slot values - hence limiting its scalability. While this cannot be addressed in this work, this is a drawback of this approach.
Some of the design decisions are stated but not well explained
Only one pre-training method compared
Authors mention they drop "dontcare" from slot gating but don't show the effect with or without it.
Not many details on the setup and how the model was trained.
Not much qualitative analysis.
Please address and clarify the cons above
Typos/Areas for improvement:
Section 3.2 and 3.3 can be shortened a lot. I would suggest showing more analysis.
More examples of the types of mistakes fixed.
At which turn in the dialogue does the error decrease the most?
What is the training time / accuracy tradeoff?
Adding another layer to make DS-split work should be trivial; there is no reason to leave that to future work. Could you show how the results look with that?
Updating score based on authors' response. |
ICLR | Title
Domain-slot Relationship Modeling using a Pre-trained Language Encoder for Multi-Domain Dialogue State Tracking
Abstract
Dialogue state tracking for multi-domain dialogues is challenging because the model should be able to track dialogue states across multiple domains and slots. Past studies were limited in that they did not factor in the relationship among different domain-slot pairs. Although recent approaches did support relationship modeling among the domain-slot pairs, they did not leverage a pre-trained language model, which has improved the performance of numerous natural language tasks, in the encoding process. Our approach fills the gap between these previous studies. We propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. Inspired by the way the special [CLS] token in BERT is used to aggregate the information of the whole sequence, we use multiple special tokens, one for each domain-slot pair, that encode information corresponding to their domain and slot. The special tokens are run together with the dialogue context through the pre-trained language encoder, which effectively models the relationship among different domain-slot pairs. Our experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets.
1 INTRODUCTION
A task-oriented dialogue system is designed to help humans solve tasks by understanding their needs and providing relevant information accordingly. For example, such a system may assist its user with making a reservation at an appropriate restaurant by understanding the user's needs for having a nice dinner. It can also recommend an attraction site to a travelling user, accommodating the user's specific preferences. Dialogue State Tracking (DST) is a core component of these task-oriented dialogue systems, and it aims to identify the state of the dialogue between the user and the system. DST represents the dialogue state with triplets of the following items: a domain, a slot, and a value. Sets such as {restaurant, price range, cheap} or {train, arrive-by, 7:00 pm} are examples of such triplets. Fig. 1 illustrates an example of the dialogue state during the course of the conversation between the user and the system. Since a dialogue continues for multiple turns of utterances, the DST model should successfully predict the dialogue state at each turn as the conversation proceeds. For multi-domain conversations, the DST model should be able to track dialogue states across different domains and slots.
Past research on multi-domain conversations used a placeholder in the model to represent domain-slot pairs. A domain-slot pair is inserted into the placeholder in each run, and the model runs repeatedly until it covers all types of domain-slot pairs (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019). A DST model generally uses an encoder to extract information from the dialogue context that is relevant to the dialogue state. A typical input for a multi-domain DST model comprises a sequence of the user's and the system's utterances up to turn t, Xt, and the domain-slot information for domain i and slot j, DiSj. In each run, the model feeds the input for a given domain-slot pair through the encoder.
fencoder(Xt, DiSj) for i = 1, · · · , n, j = 1, · · · ,m, (1)
where n and m are the numbers of domains and slots, respectively. However, because each domain-slot pair is modeled independently, the relationship among the domain-slot pairs cannot be learned. For example, if the user first asked for a hotel in a certain place and later asked for a restaurant near that hotel, sharing the information between {hotel, area} and {restaurant, area} would help the model recognize that the restaurant should be in the same area as the hotel.
Recent approaches address these issues by modeling the dialogue state of every domain-slot pair in a single run, given a dialogue context (Chen et al., 2020; Le et al., 2019). This approach can be represented as follows:
fencoder(Xt, D1S1, · · · , DnSm). (2)
Because the encoder receives all of the domain-slot pairs, the model can factor in the relationship among the domain-slot pairs through the encoding process. For the encoder, these studies used models that are trained from scratch, without pre-training. However, since DST involves natural language text for the dialogue context, using a pre-trained language model can help improve the encoding process. Several studies used BERT (Devlin et al., 2019), a pre-trained bidirectional language model, for encoding the dialogue context (Zhang et al., 2019; Lee et al., 2019; Chao & Lane, 2019; Gao et al., 2019), but did not model the dependencies among different domain-slot pairs. Our approach fills the gap between these previous studies. In this work, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We modify the input structure of BERT, specifically the special token part of it, to adjust it for multi-domain DST.
The [CLS] token of BERT (Devlin et al., 2019) is expected to encode the aggregate sequence representation as it runs through BERT, and it is used for various downstream tasks such as sentence classification or question answering. This [CLS] token can also be used as an aggregate representation for a given dialogue context. However, in a multi-domain dialogue, a single [CLS] token has to store information for different domain-slot pairs at the same time. In this respect, we propose to use multiple special tokens, one for each domain-slot pair. Using a separate special token for each domain-slot pair is more effective in storing information for different domains and slots since each token can concentrate on its corresponding domain and slot. We consider two different ways to represent such tokens: DS-merge and DS-split. DS-merge employs a single token to represent a single domain-slot pair. For example, to represent the domain-slot pair {restaurant, area}, we use a special token DS(restaurant,area). DS-split, on the other hand, employs separate tokens for the domain and the slot and then merges them into one to represent a domain-slot pair. For {restaurant, area}, the domain token Drestaurant and the slot token Sarea are computed separately and then merged. We use {DS}merge and {DS}split to represent the special tokens for DS-merge and DS-split, respectively. Unless it is absolutely necessary to specify whether the tokens are from DS-merge or DS-split, we refer to them as {DS} tokens, without special distinction, in the descriptions that follow. The {DS} tokens, after being encoded by the pre-trained language encoder along with the dialogue context, are used to predict their corresponding domain-slot values for a given dialogue context.
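To make the two layouts concrete, the minimal sketch below builds the two special-token vocabularies for a toy list of domains and slots; the domain/slot names and the token string format are illustrative assumptions, not the exact identifiers used in our implementation.

```python
# Illustrative sketch: special-token vocabularies for DS-merge vs. DS-split.
# Domain/slot names and token formats are examples, not the exact strings we use.
domains = ["restaurant", "hotel", "taxi"]
slots = ["area", "price range", "name"]

# DS-merge: one special token per domain-slot pair -> n * m tokens.
ds_merge_tokens = [f"[DS_{d}_{s}]" for d in domains for s in slots]

# DS-split: one token per domain and one per slot -> n + m tokens; a pair is later
# represented by concatenating the two encoded tokens.
ds_split_tokens = [f"[D_{d}]" for d in domains] + [f"[S_{s}]" for s in slots]

print(len(ds_merge_tokens))  # 9  (n * m)
print(len(ds_split_tokens))  # 6  (n + m)
```

DS-split needs only n + m special tokens, which is why it scales better when many slots are shared across domains.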
2 RELATED WORKS
Recent work on dialogue state tracking can be largely divided into two groups according to how the slot values are predicted: fixed-vocabulary and open-vocabulary. The fixed-vocabulary approach, also known as the picklist-based approach, uses a classification module to predict the dialogue state for each slot from a pre-defined set of candidate values (Zhong et al., 2018; Nouri & Hosseini-Asl, 2018; Ramadan et al., 2018; Eric et al., 2019; Lee et al., 2019; Chen et al., 2020). The open-vocabulary approach generates the dialogue state for each domain-slot pair either by using a generative decoder to generate text (Wu et al., 2019; Hosseini-Asl et al., 2020) or by extracting text spans from the dialogue history (Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). There is also an approach that uses both picklist-based and span-based methods according to the slot type (Zhang et al., 2019).
For models that deal with multi-domain dialogue, how they handle different domain-slot pairs is another way to divide them. The first approach encodes the dialogue context independently of the domain-slot pairs and uses separate modules for each domain-slot pair (Eric et al., 2019; Gao et al., 2019; Goel et al., 2019; Heck et al., 2020). The second approach encodes the dialogue context using the domain-slot pair information as the prefix and runs the encoder multiple times (Nouri & Hosseini-Asl, 2018; Wu et al., 2019). Other approaches encode the dialogue context independently but merge it with the domain-slot pair information later with a separate fusion module (Zhong et al., 2018; Ramadan et al., 2018; Lee et al., 2019). However, none of these models are able to model the relationship among different domain-slot pairs because there is no module that enables the interaction between them.
(Le et al., 2019) and (Chen et al., 2020) directly model the relationship among different domain-slot pairs. (Le et al., 2019) uses a Fertility decoder to learn potential dependencies across domain-slot pairs, but without using a pre-trained language model. Also, their model requires additional data, such as system actions and delexicalized system responses, for its performance. (Chen et al., 2020) also explicitly models the relationship among different domain-slot pairs by using a Graph Attention Network (GAT) (Veličković et al., 2018). A schema graph, which is the relation graph between domains and slots, is utilized for connecting edges in the GAT. Our work is different from these works in that we leverage the power of a pre-trained language encoder for directly modeling the dependencies among different domain-slot pairs.
(Hosseini-Asl et al., 2020) takes a different approach from the others by using multi-task learning that encompasses DST as well as action and response generation with a generative language model GPT-2 (Radford et al., 2019). However, since our work is focused on DST, we consider the model that is trained on DST only. In the decoding process, dialogue states for different domain-slot pairs are sequentially generated.
3 PROPOSED METHOD
Our model is composed of three parts. The first is the domain-slot-context (DSC) encoder, which encodes the dialogue context along with the special tokens representing domain-slot pairs. Next is the slot-gate classifier, a preliminary classifier that predicts whether each domain-slot pair is relevant to the dialogue context. We adopted the concept of the slot-gate classifier from (Wu et al., 2019) and made adjustments to apply it to our model. The last is the slot-value classifier, which predicts the value for each domain-slot pair among the candidate values.
In the following descriptions, we assume a dialogue context with a total of T turns. The task is to predict the dialogue state, which consists of {domain, slot, value} triplets for all domain-slot pairs, for every turn t = 1, · · · , T, using the dialogue context up to each turn. Fig. 2 shows the overview of our proposed model.
3.1 DOMAIN-SLOT-CONTEXT ENCODER
The main structure of our model is the DSC encoder, which uses a pre-trained language model to encode the dialogue context along with the {DS} tokens. For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) due to its strong performance on numerous natural language understanding tasks while having fewer parameters compared to other BERT-style encoders. {DS} tokens work like
the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). The set of special tokens for each layout are shown in Eq. (3) and Eq. (4), respectively. In DS-merge, we used special tokens for each individual domain-slot pair. If there are many domain-slot pairs, using this layout can increase the number of special tokens as each domain-slot pair requires a separate special token. In DS-split, we used separate tokens for the domain and slot. To represent a domain-slot pair, we merged the corresponding tokens from each domain and slot by concatenating them. This promotes modeling compositionality, since the same slot token can be used for different domains. These {DS} tokens and the dialogue context are processed through the DSC encoder, which results in each token in {DS} being encoded with contextualized representations according to its domain and slot.
{DS}merge = {DS(domain(1),slot(1)), · · · , DS(domain(n),slot(m))}   (3)
{DS}split = {Ddomain(1), · · · , Ddomain(n), Sslot(1), · · · , Sslot(m)}   (4)
Fig. 3 shows the input representation of the DSC encoder. The sequence begins with {DS} tokens. The special token [CLS] follows, which encodes the overall information of the dialogue context. For the dialogue context, we added a special token [SEPu] to separate each user or system utterance, which is added at the end of each utterance from the user or system. The input ends with a special token [SEP ] as the end-of-sequence token.
4 types of embeddings are summed up to represent each token embedding. We used the pre-trained word embedding of ALBERT, except for the {DS} tokens, which are randomly initialized. We introduced the token type embedding to differentiate the {DS} tokens, user utterances tokens, and system utterances tokens. For DS-merge, we used a single token type embedding to represent a domain-slot pair, whereas for DS-split, we used two token type embeddings, one for the domain and the other for the slot. We did not apply this embedding for the [CLS] token. Position embeddings are also employed from ALBERT, but the index of the positional embedding starts from the [CLS] token. We did not use the positional embedding for the {DS} tokens as the order within those tokens is meaningless. Lastly, the segment embedding from ALBERT was used to represent the whole sequence as a single segment, which is the default segment embedding of ALBERT.
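As a rough illustration of this input representation, the sketch below sums the four embeddings for a batch of token ids; the sizes, module names, and the way {DS} tokens are excluded from positional indexing (here approximated by clamping them to position 0) are assumptions for illustration, not the exact ALBERT implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the input representation (Fig. 3): every position is the sum of a
# word, token-type, position, and segment embedding. Sizes and names are illustrative.
vocab_size, d_model, max_len = 30000, 128, 1024
n_token_types = 4                       # e.g. domain-slot (or domain/slot), user, system, other
word_emb = nn.Embedding(vocab_size, d_model)
type_emb = nn.Embedding(n_token_types, d_model)
pos_emb = nn.Embedding(max_len, d_model)
seg_emb = nn.Embedding(2, d_model)

def build_inputs(token_ids, token_type_ids, num_ds_tokens):
    # token_ids, token_type_ids: (batch, seq_len); the first num_ds_tokens positions
    # hold the {DS} tokens, followed by [CLS] and the dialogue context.
    batch, seq_len = token_ids.shape
    # position indices start from the [CLS] token; the {DS} tokens are approximated
    # here by clamping them to position 0 (no dedicated positional embedding)
    positions = (torch.arange(seq_len) - num_ds_tokens).clamp(min=0)
    positions = positions.unsqueeze(0).expand(batch, -1)
    segments = torch.zeros_like(token_ids)            # the whole input is one segment
    return (word_emb(token_ids) + type_emb(token_type_ids)
            + pos_emb(positions) + seg_emb(segments))

x = build_inputs(torch.randint(0, vocab_size, (2, 40)),
                 torch.randint(0, n_token_types, (2, 40)), num_ds_tokens=30)
print(x.shape)                                        # torch.Size([2, 40, 128])
```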
The DSC encoder produces contextualized embeddings for every input token. However, for the slot-gate classifier and slot-value classifier, we only use the special-token outputs of the DSC encoder (the [CLS] token and the {DS} tokens). This is formally defined as follows for DS-merge and DS-split, respectively, for turn t:
D̂S(1,1), · · · , D̂S(n,m), ĈLS = DSCencoder([{DS}merge, CLS,Xt, SEP ]), (5)
D̂1, · · · , D̂n, Ŝ1 · · · , Ŝm, ĈLS = DSCencoder([{DS}split, CLS,Xt, SEP ]), (6)
where Xt represents the dialogue context (S1, SEPu, U1, SEPu, · · · , St, SEPu, Ut, SEPu). Ut and St represent the user and system utterances for the t-th turn, respectively. The {DS} tokens and the [CLS] token with the hat notation ̂ represent the encoded outputs of the DSC encoder for those special tokens. They are vectors in Rd, where d is the hidden dimension of ALBERT.
3.2 SLOT-GATE CLASSIFIER
For the slot-gate classifier, we use the DSC encoder output of the {DS} tokens for each domain-slot pair to predict whether it is relevant to the dialogue or not. In previous methods, gating used the categories {prediction, dontcare, none}, where prediction means a slot value is neither dontcare nor none, dontcare means that the predicted slot value is dontcare, and none means that the domain-slot is non-relevant. The labels for slot-gates are derived from the slot values. However, the performance for the dontcare category was far inferior to the other two categories, so we dismissed the dontcare category and only used {prediction, none}. In our preliminary models with ALBERT large-v2, the precision and recall for dontcare were 48.87% and 17.21%, respectively, while they were 98.91% and 99.45% for none and 96.16% and 94.93% for prediction. In this setting, the dontcare category is included in prediction. For DS-merge, the slot-gate classifier predicts the value using the domain-slot pair special token. For the domain-slot pair of domain i and slot j, the slot-gate classifier output for DS-merge is
GateDiSj = sigmoid ( WGDS(i,j) D̂S(i,j) ) , (7)
where WGDiSj ∈ R 1×d. For DS-split, the slot-gate classifier uses the concatenated output of the corresponding domain and slot token. Similarly, for the same domain-slot pair, the slot-gate classifier output for DS-split is
GateDiSj = sigmoid ( WG(Di,Sj) [ D̂i|Ŝj ]) , (8)
where | represents the concatenation of vectors and WG(Di,Sj) ∈ R1×2d. The loss objective for the gate classification is as follows:
Lgate = ∑(i,j)∈DS BinaryCrossEntropy(ygateDiSj, GateDiSj),   (9)
where DS refers to the set of all domain-slot pairs and ygateDiSj is the binary slot-gate label for domain i and slot j. If a domain-slot pair is predicted as none, the corresponding output of the slot-value classifier is set to none regardless of its prediction.
3.3 SLOT-VALUE CLASSIFIER
We employ the fixed-vocabulary based classification method for predicting slot values. As in (Zhang et al., 2019), the candidate-value list for each domain-slot pair was constructed by using the values from the training dataset, rather than using the incomplete ontology from the dataset. The [CLS] token is concatenated with each token from {DS}, and used as the input to the slot-value classifier for each domain-slot pair. The slot-value classifier output of domain i and slot j for DS-merge is as follows:
V alueDiSj = softmax ( WVDS(i,j) [ D̂S(i,j)|ĈLS ]) , (10)
where WVDS(i,j) ∈ R nDiSj × 2d and nDiSj is the number of candidate values for domain i and slot j. Similarly, for DS-split, the slot-value classifier output is
V alueDiSj = softmax ( WV(Di,Sj) [ D̂i|Ŝj |ĈLS ]) , (11)
where WV(Di,Sj ) ∈ R nDiSj × 3d. The loss objective for the slot-value classification is as follows:
Lvalue = ∑(i,j)∈DS CrossEntropy(yvalueDiSj, ValueDiSj),   (12)
where yvalueDiSj is the label for domain i and slot j.
3.4 TOTAL OBJECTIVE FUNCTION
The DSC encoder, slot-gate classifier, and slot-value classifier are jointly trained under the total objective function below.
Ltotal = Lgate + Lvalue (13)
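A minimal sketch of how the two heads and the joint objective of Eqs. (7), (9), (10), (12), and (13) could be implemented on top of the encoded special tokens (DS-merge variant) is given below; the hidden size, the number of pairs, and the candidate-value counts are illustrative, and the gating-based override of the slot value to none is omitted for brevity.

```python
import torch
import torch.nn as nn

# Sketch of the two heads (DS-merge variant) on top of the DSC encoder outputs.
# Shapes and the number of candidate values are illustrative assumptions.
d = 128                      # encoder hidden size
n_pairs = 30                 # number of domain-slot pairs
n_values = [20] * n_pairs    # candidate-value list size per pair

gate_heads = nn.ModuleList([nn.Linear(d, 1) for _ in range(n_pairs)])       # Eq. (7)
value_heads = nn.ModuleList([nn.Linear(2 * d, nv) for nv in n_values])      # Eq. (10)

def dst_loss(ds_tokens, cls_token, gate_labels, value_labels):
    # ds_tokens: (batch, n_pairs, d) encoded {DS} tokens; cls_token: (batch, d)
    # BCEWithLogitsLoss folds the sigmoid of Eq. (7) into the loss for stability.
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    l_gate, l_value = 0.0, 0.0
    for k in range(n_pairs):
        gate_logit = gate_heads[k](ds_tokens[:, k]).squeeze(-1)             # Eq. (7)
        value_logit = value_heads[k](torch.cat([ds_tokens[:, k], cls_token], -1))  # Eq. (10)
        l_gate = l_gate + bce(gate_logit, gate_labels[:, k].float())        # Eq. (9)
        l_value = l_value + ce(value_logit, value_labels[:, k])             # Eq. (12)
    return l_gate + l_value                                                 # Eq. (13)

loss = dst_loss(torch.randn(4, n_pairs, d), torch.randn(4, d),
                torch.randint(0, 2, (4, n_pairs)), torch.randint(0, 20, (4, n_pairs)))
print(loss.item())
```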
4 EXPERIMENT SETUP AND RESULTS
We evaluate our model using the joint goal accuracy, which considers a model prediction to be correct when the prediction jointly matches the ground truth values for all domain-slot pairs, given a dialogue context.
4.1 DATASET
We use the MultiWOZ-2.1 (Eric et al., 2019) and MultiWOZ-2.2 dataset (Zang et al., 2020), both of which fixed noisy annotations and dialogue utterances of the MultiWOZ 2.0 dataset (Budzianowski et al., 2018). The dataset contains 7 domains and over 10,000 dialogues. We follow the previous studies and use 5 domains (train, restaurant, hotel, taxi, attraction) with 30 domain-slot pairs. The other two domains (police, hospital) have little data and do not appear in the test dataset. For MultiWOZ-2.1, we follow the pre-processing explained in (Wu et al., 2019). For MultiWOZ-2.2, we use the raw data as given without any pre-processing.
4.2 SETUP
For the pre-trained language encoder, we used ALBERT (Lan et al., 2019) from HuggingFace (Wolf et al., 2019) in PyTorch (Paszke et al., 2019). We used the xxlarge-v2 version of ALBERT for the main experiment and compare other versions (base-v2, large-v2) in the analysis section. We also compared RoBERTa (Liu et al., 2019) to test the generalizability of our model. The optimizer was AdamW (Loshchilov & Hutter, 2018) with a learning rate of 1e−5 for ALBERT-xlarge-v2, ALBERT-xxlarge-v2 and RoBERTa-large and 5e−5 for ALBERT-base-v2, ALBERT-large-v2 and RoBERTa-base. We applied linear warm-up followed by linear decay for the learning rate. We trained all models with an effective batch size of 32, using gradient accumulation for the bigger ALBERT models. Models were
selected based on their joint goal accuracy on the validation data split. Only the training data was used to build the labels for each domain-slot pair. We used two NVIDIA V100 GPUs for our training. The original ALBERT was pre-trained with a sequence length of up to 512 tokens. However, dialogues longer than 512 tokens exist in the data. Usually, the standard procedure for this situation is to truncate the sequence to 512 tokens and discard the remaining tokens. Instead, to cover the dialogues longer than 512 tokens that are in the dataset, we resized the positional embedding to cover the maximum dialogue length. We preserved the original pre-trained position embeddings for position indices up to 512 and randomly initialized the remaining position indices. This method showed better results than limiting the maximum sequence length to 512. We plan to release our code on GitHub.
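The position-embedding resizing described above can be sketched as follows for the HuggingFace ALBERT implementation; the attribute names follow the transformers library at the time of writing and may differ across versions, so this is an assumption-laden sketch rather than our exact code.

```python
import torch
import torch.nn as nn
from transformers import AlbertModel

# Sketch: extend ALBERT's position embeddings beyond 512 tokens by copying the
# pre-trained rows and randomly initializing the new ones. Attribute names follow the
# HuggingFace implementation at the time of writing and may differ across versions.
def resize_position_embeddings(model, new_max_len):
    old_emb = model.embeddings.position_embeddings        # nn.Embedding(512, hidden)
    old_len, dim = old_emb.weight.shape
    new_emb = nn.Embedding(new_max_len, dim)
    new_emb.weight.data.normal_(mean=0.0, std=model.config.initializer_range)
    new_emb.weight.data[:old_len] = old_emb.weight.data   # keep the pre-trained part
    model.embeddings.position_embeddings = new_emb
    if hasattr(model.embeddings, "position_ids"):         # cached index buffer, if any
        model.embeddings.position_ids = torch.arange(new_max_len).unsqueeze(0)
    model.config.max_position_embeddings = new_max_len
    return model

model = resize_position_embeddings(AlbertModel.from_pretrained("albert-base-v2"), 1024)
```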
4.3 RESULTS
Table 1 shows the joint goal accuracy of our model compared to previous methods. Both of our models show better performance among models without any additional supervision other than the dialogue context and domain-slot pair labels. In particular, the DS-split, ALBERT-xxlarge-v2 version of our proposed model achieves state-of-the-art results on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets, without any form of extra supervision. With ALBERT-xxlarge-v2, the model with DS-split shows better results than the model with DS-merge. This shows that in models with enough capacity, the slot-sharing of DS-split was more effective. However, this was not the case for smaller ALBERT models, which is explained in Section 4.4.2. This is important in that scalability is much better for DS-split than DS-merge, as many slots can be shared across different domains, reducing the number of special tokens to be used. We show the individual domain-slot accuracy in Appendix A.2, Table 4.
4.4 ANALYSIS
In this section, we show that relationship modeling among different domain-slot pairs is indeed the key factor of our proposed model by running ablation studies. Also, we compare the effect of the size and type of the pre-trained language encoder in terms of performance.
4.4.1 RELATIONSHIP MODELING AMONG DIFFERENT DOMAIN-SLOT PAIRS
First, we did not use any {DS} tokens and only used the [CLS] token. Because there are no dedicated special tokens for each domain-slot pair, the performance is very poor, as shown in the 'None' row of Table 2. This shows that our approach of introducing {DS} tokens is effective. Next, to evaluate the effect of relationship modeling among different domain-slot pairs, we blocked the attention among different {DS} tokens during the encoding process, which restricts direct interaction among the {DS} tokens. Table 2 shows that without the relationship modeling, our model performance deteriorates by a substantial amount. This validates our idea that relationship modeling is the crucial factor for our approach.
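The attention-blocking ablation can be implemented with a full sequence-length-by-sequence-length attention mask; the sketch below only constructs such a mask and assumes the encoder accepts a per-example 2-D mask (otherwise the mask has to be injected at the self-attention level).

```python
import torch

# Sketch of the mask used in the ablation: each {DS} token may attend to itself and to
# the rest of the sequence, but not to the other {DS} tokens. We assume the encoder can
# consume a per-example (seq_len x seq_len) attention mask with 1 = attend, 0 = blocked.
def ds_blocked_attention_mask(seq_len, num_ds_tokens):
    mask = torch.ones(seq_len, seq_len)
    # block attention among the {DS} tokens, which occupy positions 0 .. num_ds_tokens-1
    mask[:num_ds_tokens, :num_ds_tokens] = torch.eye(num_ds_tokens)
    return mask

mask = ds_blocked_attention_mask(seq_len=8, num_ds_tokens=3)
print(mask[:3, :3])   # identity block: {DS} tokens cannot attend to one another
```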
In the Appendix A.1, we show some examples of wrong predictions that models without direct relationship modeling has made.
4.4.2 SIZE AND TYPE OF THE PRE-TRAINED LANGUAGE ENCODER
We compared ALBERT and RoBERTa (Liu et al., 2019) across various model sizes of these pre-trained language encoders. Table 3 shows the results for the different versions. For ALBERT, larger models yield better results, consistent with the various downstream tasks on which ALBERT was evaluated (Lan et al., 2019). Except for ALBERT-xxlarge, all configurations show DS-merge outperforming DS-split. Based on the drastic increase in performance with the xxlarge model, we presume that the high model capacity of ALBERT-xxlarge enabled the {DS}split tokens to effectively encode information and made slot-sharing work. In smaller models, slot-sharing might not have been as effective due to their smaller encoding capacity. Also, concatenation, which was used for merging domain and slot embeddings in DS-split, might not have been enough to fully represent the information for the domain-slot pair in smaller models. RoBERTa shows a similar trend, with larger models delivering stronger performance.
4.4.3 LEARNING CURVE
Fig. 4 shows the learning curve of the ALBERT-xxlarge-v2 on the MultiWOZ-2.2 dataset. The joint goal accuracy steadily increases after the slot-value loss plateaus.
5 CONCLUSION
In this paper, we propose a model for multi-domain dialogue state tracking that effectively models the relationship among domain-slot pairs using a pre-trained language encoder. We introduced two methods to represent special tokens for each domain-slot pair: DS-merge and DS-split. These tokens work like the [CLS] token for BERT, encoding information corresponding to its domain-slot pair (DS-merge) or domain and slot (DS-split). These special tokens are run together with the dialogue context through the pre-trained language encoder, which enables modeling the relationship among different domain-slot pairs. Experimental results show that our model achieves state-of-the-art performance on the MultiWOZ-2.1 and MultiWOZ-2.2 datasets. The ablation experiments show that the relationship modeling among different domain-slot pairs is the key element of our model. Also, we showed that larger pre-trained language encoders improve performance. We hope to advance our research by finding ways to effectively apply our model to the open-vocabulary approach, which will enable better generalization to candidate values that are outside of the training data.
ACKNOWLEDGMENTS
This research was supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services / PR09-01-000-20]
A APPENDIX
A.1 RELATIONSHIP MODELING EXAMPLES
A.1.1 EXAMPLE 1
Fig. 5 shows an example of a wrong prediction made by the model without domain-slot relationship modeling. The value for {taxi, departure} is not explicitly mentioned in the dialogue context, but it can be inferred from the dialogue context and {hotel, name}. Our full model correctly predicts the value for {taxi, departure}, whereas the model without relationship modeling fails to do so.
User: i am staying in cambridge soon and would like to stay at a and b guest house.
System: sure, how many days and how many people?
User: we are staying 6 people for 4 nights starting from tuesday. i need the reference number
System: your booking is successful! your reference number is iigra0mi. do you need anything else?
User: yeas, what to recommend if i want to see good architecture in the west part of town?
System: unfortunately there is no good architecture on the west end but i can look in other parts of town if you want
User: what about a museum?
System: what part of town there are none in the west.
User: there are no museums in the west at all?
System: sorry about that, there are actually 7 in that area.
User: great, can i get the postcode, entrance fee and address of 1 of them?
System: cafe jello gallery has a free entrance fee. the address is cafe jello gallery, 13 magdalene street and the post code is cb30af. can i help you with anything else?
User: yes please. i need a taxi to commute.
System: when would you like to leave and arrive?
User: i would like to get to the gallery by 13:45, please.
System: sure, lookout for a blue volvo the contact number is 07941424083. can i help with anything else?
User: that is all for now. thank you so much
A.1.2 EXAMPLE 2
Fig. 6 also shows an example of a wrong prediction made by the model without domain-slot relationship modeling. The value for {train, day} is not explicitly mentioned in the dialogue context. In a similar manner to the example above, it can be inferred from {restaurant, book day}.
User: i would like to find a particular restaurant in cambridge. the name of the restaurant is restaurant 2 two. could you give me the location?
System: restaurant 2 two is nice french restaurant located at 22 chesterton road chesterton. would like me to book you a table?
User: that would be great. i need it for 8 on friday.
System: do you have a time preference?
User: yes at 11:15 if that is not available i can do 10:15
System: the booking for 10:15 was successful they will reserve the table for 15 minutes. the reference number is 6b5z7vj5.
User: thanks. can you help me find a train, too? i want to leave cambridge some time after 12:15.
A.2 INDIVIDUAL SLOT ACCURACY
Table 4 shows the individual domain-slot accuracy for the ALBERT-xxlarge-v2 model on the MultiWOZ-2.2 dataset.
1. What is the focus of the paper regarding multidomain state-tracking models?
2. What are the strengths of the proposed approach, particularly its simplicity and effectiveness?
3. Do you have any concerns about the novelty of the idea and its similarity to prior works?
4. How does the reviewer assess the analysis section and the ablation study?
5. Are there any suggestions or questions regarding the experimental setup and the use of different model sizes?
6. Any further comments or recommendations for improving the paper?
Review
[Summary] In this paper, the authors proposed a multidomain state-tracking model that leverages the relationship among different domain-slot pairs. This is done by leveraging the full-attention step over the [CLS] special token and by providing all the domain-slot pairs as special tokens to a pre-trained language model (Figure 2 is very clear). To predict the value of the slot D_{i,j}, the authors concatenate the representation of the [CLS] token, shared among all the domain-slots, with the D_{i,j} representation provided as input, and use a gating mechanism, based only on the D_{i,j} representation, to decide whether the slot requires a value (i.e., prediction) or not (e.g., None).
The authors experimented using ALBERT (Lan et al., 2019) as a pre-trained language model on the well-known benchmarks MultiWOZ 2.0 (Budzianowski et al., 2018) and 2.1 (Eric et al., 2019). The authors studied different formats to represent D_{i,j}: DS-merge (i.e., one token per domain-slot) and DS-split (i.e., one token per slot and one per domain, thus more scalable). The reported performance is state-of-the-art at the time of submission.
[Pros]
The paper reads well and it is easy to follow for people working on Task-Oriented dialogue.
The proposed method is simple and effective, and it would be easy to reproduce.
[Cons]
The idea of using domain-slot pairs as input to a large pre-trained model is not novel (Wu et al., 2019; Zhang et al., 2019; Lee et al., 2019), as also pointed out by the authors, but the authors do not explicitly clarify this in the methodology section, leading the reader to believe that the domain-slot pairs are their own contribution. The same holds for the slot-gate (Wu et al., 2019).
The authors claim to learn relations between slots, but the analysis section is very thin and it just shows an ablation that masks the attention between the slots. Two points: why not just remove the [CLS] token instead of removing the attention, and why use only ALBERT large? For instance, the authors said "For this experiment, we used the ALBERT configuration of large-v2, for faster experimentation", which is contradictory since large-v2 is, I guess, the slowest to run. Can the authors show this ablation for all the model sizes?
Although MWoZ is the current benchmark for DST in ToDs, there are also other datasets for this task that can be considered (e.g., Schema Guided Dialogue (SGD) (Rastogi et al., 2019)).
[Reason to Reject] The main contribution of this paper is very thin, adding the [CLS] token as input, and the main technical contribution is not well explored (missing an in-depth ablation).
[Reason to Accept] State-of-the-art performance at the submission time. To be noted, (Mehri et.al. 2020) reported better performance in MWoZ and other datasets, but this paper was released after the ICLR submission deadline.
[Question]
Can the authors show the ablation for all the model size?
[Suggestion]
Figure 4 is very hard to read. I suggest to better format the dialogue. |
ICLR | Title
DensePure: Understanding Diffusion Models for Adversarial Robustness
Abstract
Diffusion models have been recently employed to improve certified robustness through the process of denoising. However, the theoretical understanding of why diffusion models are able to improve the certified robustness is still lacking, preventing from further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method DensePure, designed to improve the certified robustness of a pretrained model (i.e. classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to get multiple reversed samples, which are then passed through the classifier, followed by majority voting of inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process in a diffusion model is also high; thus sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with a high probability. By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model’s reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high density region in the conditional distribution so that it can enhance certified robustness. We conduct extensive experiments to demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model via randomized smoothing. We show that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average. Project page:https://densepure.github.io/.
1 INTRODUCTION
Diffusion models have been shown to be a powerful image generation tool (Ho et al., 2020; Song et al., 2021b) owing to their iterative diffusion and denoising processes. These models have achieved state-of-the-art performance on sample quality (Dhariwal & Nichol, 2021; Vahdat et al., 2021) as well as effective mode coverage (Song et al., 2021a). A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time (Song et al., 2021b).
Given the natural denoising property of diffusion models, empirical studies have leveraged them for adversarial purification (Nie et al., 2022; Wu et al., 2022; Carlini et al., 2022). For instance, Nie et al. (2022) employed diffusion models for model purification, DiffPure. They empirically show that by carefully choosing the amount of Gaussian noises added during the diffusion process, adversarial
∗the first four authors contributed equally
perturbations can be removed while preserving the true label semantics. Despite the significant empirical result, there is no provable guarantee of the achieved robustness. A concurrent work (Carlini et al., 2022) instantiated the randomized smoothing approach with the diffusion model to offer a provable guarantee of model robustness against L2-norm bounded adversarial example. However, they do not provide a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
Our Approach. We are the first to theoretically analyze the fundamental properties of diffusion models to understand why and how diffusion models enhance certified robustness. This deeper understanding allows us to propose a new method DensePure to improve the certified robustness of any given classifier more effectively using diffusion models. An illustration of the DensePure framework is provided in Figure 1, where it consists of a pretrained diffusion model and a pretrained classifier. DensePure incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times with different random seeds to approximate the label of the high-density region in the conditional distribution via a simple majority vote strategy. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to calculate their labels. We then apply the majority vote on the set of labels to get the final predicted label.
DensePure is inspired by our theoretical analysis, where we show that the reverse process of the diffusion model provides a conditional distribution of the reversed sample given an adversarial input. Sampling from this conditional distribution can enhance the certified robustness. Specifically, we prove that when the data density of clean samples is high, it is a sufficient condition for the conditional density of the reversed samples to be also high. Therefore, in DensePure, samples from the conditional distribution can recover the ground-truth labels with a high probability.
For ease of understanding and rigorous analysis, we use the highest density point in the conditional distribution as the deterministic reversed sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model's reverse process is the union of multiple convex sets, each surrounding a region around the ground-truth label. Compared with the robust region in previous work (Cohen et al., 2019), which focuses on only one region with the ground-truth label, such a union of multiple convex sets has the potential to provide a much larger robust region, resulting in higher certified robustness. Moreover, the characterization implies that the size of robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels.
We conduct extensive experiments on the ImageNet and CIFAR-10 datasets under different settings to evaluate the certified robustness of DensePure. In particular, we follow the setting from Carlini et al. (2022) and rely on randomized smoothing to certify robustness against adversarial perturbations bounded in the L2-norm. We show that DensePure achieves a new state-of-the-art certified robustness on the standard pretrained model without further tuning any model parameters (e.g., smooth augmentation (Cohen et al., 2019)). On ImageNet, it achieves consistently higher certified accuracy than existing methods, with 7% improvement on average, for every σ and every radius ϵ.
Technical Contributions. In this paper, we take the first step to understand why and how diffusion models contribute to certified robustness. We make contributions on both theoretical and empirical fronts: (1) In theory, we prove that an adversarial example can be recovered back to the original clean sample with a high probability via the reverse process of a diffusion model. (2) In theory, we characterize the robust region for each point by further taking the highest density point in the conditional distribution generated by the reverse process as the reversed sample. We show that the robust region for a given sample under the diffusion model's reverse process has the potential to be larger. To the best of our knowledge, this is the first work that characterizes the robust region of using the reverse process of the diffusion model for adversarial purification. (3) In practice, we propose DensePure based on our theoretical analysis. We demonstrate that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average.
2 PRELIMINARIES AND BACKGROUNDS
Continuous-Time Diffusion Model. The diffusion model has two components: the diffusion process followed by the reverse process. Given an input random variable x0 ∼ p, the diffusion process adds isotropic Gaussian noises to the data so that the diffused random variable at time t is xt = √ αt(x0 + ϵt), s.t., ϵt ∼ N (0, σ2t I), and σ2t = (1 − αt)/αt, and we denote xt ∼ pt. The forward diffusion process can also be defined by the stochastic differential equation
dx = h(x, t)dt+ g(t)dw, (SDE)
where x0 ∼ p, h : Rd × R 7→ Rd is the drift coefficient, g : R 7→ R is the diffusion coefficient, and w(t) ∈ Rn is the standard Wiener process. Under mild conditions B.1, the reverse process exists and removes the added noise by solving the reverse-time SDE (Anderson, 1982)
dx̂ = [h(x̂, t)− g(t)2▽x̂ log pt(x̂)]dt+ g(t)dw, (reverse-SDE)
where dt is an infinitesimal reverse time step, and w(t) is a reverse-time standard Wiener process.
In our context, we use the conventions of VP-SDE (Song et al., 2021b) where h(x; t) := −(1/2)γ(t)x and g(t) := √γ(t), with γ(t) positive and continuous over [0, 1], such that x(t) = √αt x(0) + √(1 − αt) ϵ, where αt = e^(−∫_0^t γ(s)ds) and ϵ ∼ N (0, I). We use {xt}t∈[0,1] and {x̂t}t∈[0,1] to denote the diffusion process and the reverse process generated by SDE and reverse-SDE respectively, which follow the same distribution.
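For concreteness, a discretized version of this forward noising can be sketched as follows; the linear beta schedule and its constants are illustrative placeholders rather than the schedule of any particular pretrained model.

```python
import torch

# Sketch of the forward noising x_t = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps for a
# discretized schedule; the linear beta schedule below is illustrative only.
N = 1000
betas = torch.linspace(1e-4, 0.02, N)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # plays the role of alpha_t above

def diffuse(x0, i):
    eps = torch.randn_like(x0)
    return alphas_bar[i].sqrt() * x0 + (1.0 - alphas_bar[i]).sqrt() * eps, eps

x0 = torch.randn(1, 3, 32, 32)                        # a stand-in for an image
xt, eps = diffuse(x0, i=500)
```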
Discrete-Time Diffusion Model (or DDPM (Ho et al., 2020)). DDPM constructs a discrete Markov chain {x0, x1, · · · , xi, · · · , xN} as the forward process for the training data x0 ∼ p, such that P(xi|xi−1) = N (xi; √(1 − βi) xi−1, βiI), where 0 < β1 < β2 < · · · < βN < 1 are predefined noise scales such that xN approximates Gaussian white noise. Denoting αi = ∏_{j=1}^{i}(1 − βj), we have P(xi|x0) = N (xi; √αi x0, (1 − αi)I), i.e., xi(x0, ϵ) = √αi x0 + √(1 − αi) ϵ, ϵ ∼ N (0, I).
The reverse process of DDPM learns a reverse-direction variational Markov chain pθ(xi−1|xi) = N (xi−1; µθ(xi, i), Σθ(xi, i)). Ho et al. (2020) define ϵθ as a function approximator to predict ϵ from xi such that µθ(xi, i) = (1/√(1 − βi)) ( xi − (βi/√(1 − αi)) ϵθ(xi, i) ). Then the reverse-time samples are generated by x̂i−1 = (1/√(1 − βi)) ( x̂i − (βi/√(1 − αi)) ϵθ∗(x̂i, i) ) + √βi ϵ, ϵ ∼ N (0, I), and the optimal parameters θ∗ are obtained by solving θ∗ := argminθ Ex0,ϵ [ ||ϵ − ϵθ(√αi x0 + √(1 − αi) ϵ, i)||²₂ ].
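A minimal sketch of the single reverse step implied by these formulas is shown below; eps_theta is a random stand-in for the trained noise predictor, the beta schedule is illustrative, and the fixed posterior variance is used instead of a learned interpolation.

```python
import torch

# Sketch of one DDPM reverse step, i.e. one application of the update above; eps_theta
# is a placeholder for the trained noise predictor, the beta schedule is illustrative,
# and the fixed posterior variance beta_tilde_i is used instead of a learned one.
N = 1000
betas = torch.linspace(1e-4, 0.02, N)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def eps_theta(x, i):                                  # placeholder for the trained network
    return torch.randn_like(x)

def reverse_step(x_i, i):
    coef = betas[i] / (1.0 - alphas_bar[i]).sqrt()
    mean = (x_i - coef * eps_theta(x_i, i)) / (1.0 - betas[i]).sqrt()
    if i == 0:
        return mean                                   # no noise is added at the final step
    var = betas[i] * (1.0 - alphas_bar[i - 1]) / (1.0 - alphas_bar[i])   # beta_tilde_i
    return mean + var.sqrt() * torch.randn_like(x_i)

x_prev = reverse_step(torch.randn(1, 3, 32, 32), i=500)
```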
Randomized Smoothing. Randomized smoothing is used to certify the robustness of a given classifier against L2-norm based perturbations. It transforms the classifier f into a smooth version g(x) = argmaxc Pϵ∼N (0,σ2I)(f(x + ϵ) = c), where g is the smooth classifier and σ is a hyperparameter of the smooth classifier g, which controls the trade-off between robustness and accuracy. Cohen et al. (2019) show that g(x) induces certifiable robustness for x under the L2-norm with radius R, where R = (σ/2) ( Φ−1(pA) − Φ−1(pB) ); pA and pB are the probabilities of the most probable class and the "runner-up" class, respectively; Φ−1 is the inverse of the standard Gaussian CDF. pA and pB can be estimated with arbitrarily high confidence via a Monte Carlo method (Cohen et al., 2019).
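In practice, the certification step uses Monte Carlo counts of the top class; the sketch below follows the common practical variant of Cohen et al. (2019) that lower-bounds pA with a Clopper-Pearson interval and certifies R = σΦ−1(pA) (i.e., it takes pB ≤ 1 − pA). The counts and constants are illustrative.

```python
from scipy.stats import beta, norm

# Sketch of the certification step of randomized smoothing (Cohen et al., 2019): given
# Monte Carlo counts of the most probable class under Gaussian noise, lower-bound p_A
# with a one-sided Clopper-Pearson interval and certify R = sigma * Phi^{-1}(p_A_lower).
def certify_radius(n_top, n_total, sigma, alpha=0.001):
    p_a_lower = beta.ppf(alpha, n_top, n_total - n_top + 1)   # (1 - alpha) lower bound
    if p_a_lower <= 0.5:
        return None                                           # abstain: nothing certified
    return sigma * norm.ppf(p_a_lower)

print(certify_radius(n_top=990, n_total=1000, sigma=0.25))
```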
3 THEORETICAL ANALYSIS
In this section, we theoretically analyze why and how the diffusion model can enhance the robustness of a given classifier. We analyze SDE and reverse-SDE directly, as they generate the same stochastic processes {xt}t∈[0,T], and existing works establish approximations to reverse-SDE (Song et al., 2021b; Ho et al., 2020).
We first show that, given a diffusion model, solving reverse-SDE generates a conditional distribution based on the scaled adversarial sample, and this distribution has high density on data regions that have high data density and are near the adversarial sample (Theorem 3.1). See detailed conditions in B.1. Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and sample xa,t = √αt xa will generate a reversed random variable x̂0 with density P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσt²)^n)) exp( −||x − xa||²₂ / (2σt²) ), where p is the data distribution and σt² = (1 − αt)/αt is the variance of the Gaussian noise added at time t in the diffusion process.
Proof. (sketch) Under conditions B.1, we know {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, and then the rest proof follows Bayes’ Rule.
Please see the full proofs of this and the following theorems in Appendix B.2. Remark 1. Note that P (x̂0 = x|x̂t = xa,t) > 0 if and only if p(x) > 0, thus the generated reverse sample will be on the data region where we train classifiers.
In Theorem 3.1, the conditional density P(x̂0 = x | x̂t = xa,t) is high if both p(x) and the Gaussian term have high values, i.e., x has high data density and is close to the adversarial sample xa. The latter condition is reasonable since adversarial perturbations are typically bounded due to budget constraints. So the above argument implies that a reversed sample will be more likely to have the ground-truth label if the data region with the ground-truth label has high data density. For the sake of theoretical analysis and understanding, we take the point with the highest conditional density P(x̂0 = x | x̂t = xa,t) as the reversed sample, defined as P(xa; t) := argmaxx P(x̂0 = x | x̂t = xa,t). P(xa; t) is a representative of the high-density data region in the conditional distribution, and P(·; t) is a deterministic purification model. In the following, we characterize the robust region for the data region with the ground-truth label under P(·; t). The robust region and robust radius for a general deterministic purification model given a classifier are defined below. Definition 3.2 (Robust Region and Robust Radius). Given a classifier f and a point x0, let G(x0) := {x : f(x) = f(x0)} be the data region where samples have the same label as x0. Then, given a deterministic purification model P(·; ψ) with parameter ψ, we define the robust region of G(x0) under P and f as D_P^f(G(x0); ψ) := {x : f(P(x; ψ)) = f(x0)}, i.e., the set of x such that the purified sample P(x; ψ) has the same label as x0 under f. Further, we define the robust radius of x0 as r_P^f(x0; ψ) := max{ r : x0 + ru ∈ D_P^f(G(x0); ψ), ∀||u||2 ≤ 1 }, i.e., the radius of the maximum inclined ball of D_P^f(G(x0); ψ) centered around x0. We will omit P and f when it is clear from the context and write D(G(x0); ψ) and r(x0; ψ) instead. Remark 2. In Definition 3.2, the robust region (resp. radius) is defined for each class (resp. point). When using the point with the highest P(x̂0 = x | x̂t = xa,t) as the reversed sample, ψ := t.
Now, given a sample x0 with the ground-truth label, we are ready to characterize the robust region D(G(x0); ψ) under the purification model P(·; t) and classifier f. Intuitively, if the adversarial sample xa is near to x0 (in Euclidean distance), xa keeps the same label semantics as x0 and so does the purified sample P(xa; t), which implies that f(P(xa; ψ)) = f(x0). However, the condition that xa is near to x0 is sufficient but not necessary, since we can still achieve f(P(xa; ψ)) = f(x0) if xa is near to any sample x̃0 with the same label as x0. In the following, we will show that the robust region D(G(x0); ψ) is the union of the convex robust sub-regions surrounding every x̃0 with the same label as x0. The following theorem characterizes the convex robust sub-region and the robust region, respectively. Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with the ground-truth label and xa be the adversarial sample. Then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)⊤(x′0 − x0) < σt² log( p(x0)/p(x′0) ) + ||x′0 − x0||²₂ / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. (sketch) (i) Each convex half-space defined by the inequality corresponds to an x′0 such that f(x′0) ≠ f(x0), and any xa inside it satisfies P(x̂0 = x0 | x̂t = xa,t) > P(x̂0 = x′0 | x̂t = xa,t). This implies that P(xa; t) ≠ x′0 and f(P(xa; ψ)) = f(x0). The convexity follows because the intersection of convex sets is convex. (ii) The "if" follows directly from (i). The "only if" holds because if xa ∉ D(G(x0); t), then there exists x̃1 such that f(x̃1) ≠ f(x0) and P(x̂0 = x̃1 | x̂t = xa,t) > P(x̂0 = x̃0 | x̂t = xa,t), ∀x̃0 s.t. f(x̃0) = f(x0), and thus f(P(xa; ψ)) ≠ f(x0).
Remark 3. Theorem 3.3 implies that when the data region G(x0) has higher data density and larger distances to data regions with other labels, it tends to have a larger robust region, and points in the data region tend to have larger radii. Since adversarial attacks typically have small magnitude, with a large robust region the adversarial sample can be recovered to the clean sample with a high probability.
In the literature, people focus more on the robust radius (lower bound) r(G(x0); t) (Cohen et al., 2019; Carlini et al., 2022), which can be obtained by finding the maximum inclined ball inside D(G(x0); t) centered at x0. Note that although Dsub(x0; t) is convex, D(G(x0); t) is generally not. Therefore, finding r(G(x0); t) is a non-convex optimization problem. In particular, it can be formulated as a disjunctive optimization problem with integer indicator variables, which is typically NP-hard to solve. One alternative could be finding the maximum inclined ball in Dsub(x0; t), which can be formulated as a convex optimization problem whose optimal value provides a lower bound for r(G(x0); t). However, D(G(x0); t) has the potential to provide a much larger robust radius because it might connect different convex robust sub-regions into one, as shown in Figure 2.
Figure 2: An illustration of the robust region as a union of convex robust sub-regions ⋃ Dsub(xi; t), where x0, x1, x2 are samples with the ground-truth label and x3 is a sample with another label. xa = x0 + ϵa is an adversarial sample such that P(xa; t) = x1 ≠ x0, so the classification is correct even though xa is not reversed back to x0. rsub(x0) < r(x0) illustrates our claim that the union leads to a larger robust radius.
In practice, we cannot guarantee to establish an exact reverse process like reverse-SDE, but instead try to establish an approximate reverse process that mimics the exact one. As long as the approximate reverse process is close enough to the exact reverse process, they will generate close enough conditional distributions based on the adversarial sample. Then the density and locations of the data regions in the two conditional distributions will not differ much, and neither will the robust region for each data region. We take the score-based diffusion model in Song et al. (2021b) as an example and demonstrate Theorem 3.4 to bound the KL-divergence between the conditional distributions generated by reverse-SDE and the score-based diffusion model. Ho et al. (2020) showed that using variational inference to fit DDPM is equivalent to optimizing an objective resembling the score-based diffusion model with a specific weighting scheme, so the results can be extended to DDPM. Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)), where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively, JSM(θ, t; λ(·)) := (1/2) ∫_0^t Epτ(x) [ λ(τ) ∥∇x log pτ(x) − sθ(x, τ)∥²₂ ] dτ, sθ(x, τ) is the score function that approximates ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. (sketch) Let µt and νt be the path measure for reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively based on the xa,t. Under conditions B.1, µt and νt are uniquely defined and the KLdivergence can be computed via the Girsanov theorem Oksendal (2013).
Remark 4. Theorem 3.4 shows that if the training loss is smaller, the conditional distributions generated by reverse-SDE and the score-based diffusion model are closer, and they are the same if the training loss is zero. Furthermore, by Pinsker's inequality, the total variation distance is upper bounded as DTV(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) ≤ √( (1/2) JSM(θ, t; λ(·)) ).
4 DENSEPURE
Inspired by the theoretical analysis, we introduce DensePure and show how to calculate its certified robustness radius via the randomized smoothing algorithm.
Framework. Our framework, DensePure, consists of two components: (1) an off-the-shelf diffusion model with reverse process rev and (2) an off-the-shelf base classifier f .
The pipeline of DensePure is shown in Figure 1. Given an input x, we feed it into the reverse process rev of the diffusion model to get the reversed sample rev(x) and then repeat the above process K times to get K reversed samples {rev(x)1, · · · , rev(x)K}. We feed the above K reversed samples into the classifier to get the corresponding prediction {f(rev(x)1), · · · , f(rev(x)K)} and then apply the majority vote, termed MV, on these predictions to get the final predicted label ŷ = MV({f(rev(x)1), · · · , f(rev(x)K)}) = argmaxc ∑K i=1 1{f(rev(x)i) = c} .
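A high-level sketch of this prediction rule is given below; rev and classifier are placeholders for the off-the-shelf diffusion reverse process (stochastic across calls) and the pretrained classifier, and the toy usage at the end is purely illustrative.

```python
import random
from collections import Counter

# High-level sketch of the DensePure prediction rule; rev (the stochastic diffusion
# reverse process) and classifier are placeholders for the off-the-shelf components.
def densepure_predict(x, rev, classifier, K=40):
    labels = [classifier(rev(x)) for _ in range(K)]   # K independent reversed samples
    return Counter(labels).most_common(1)[0][0]       # majority vote MV(...)

# toy usage with dummy components, purely for illustration
y_hat = densepure_predict(x=0.0,
                          rev=lambda x: x + random.gauss(0, 1),
                          classifier=lambda z: int(z > 0),
                          K=11)
```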
Certified Robustness of DensePure with Randomized Smoothing.
In this paragraph, we will illustrate the algorithm to calculate certified robustness of DensePure via RS, which offers robustness guarantees for a model under a L2-norm ball.
In particular, we follow the similar setting of Carlini et al. (2022) which uses a DDPM-based diffusion model. The overall algorithm contains three steps:
(1) Our framework estimates n, the number of steps used for the reverse process of the DDPM-based diffusion model. Since randomized smoothing (Cohen et al., 2019) adds Gaussian noise ϵ ∼ N(0, σ²I) to the data input x to obtain the randomized input xrs = x + ϵ, we match the noise level of the randomized example xrs with the noise level of the diffused data xn (i.e., xn ∼ N(xn; √αn x0, (1 − αn)I)) after n diffusion steps, so that αn = 1/(1 + σ²). In this way, we can compute the corresponding timestep n = argmin_s { |αs − 1/(1 + σ²)| : s ∈ [N] }.
(2) Given the calculated timestep n, we scale xrs by √αn to obtain the scaled randomized smoothing sample √αn xrs. We then feed √αn xrs into the reverse process of the diffusion model K times to get the reversed sample set {x̂0^1, x̂0^2, · · · , x̂0^K}.
(3) We feed the obtained reversed sample set into a standard off-the-shelf classifier f to get the corresponding predicted labels {f(x̂0^1), . . . , f(x̂0^K)}, and apply the majority vote, denoted MV(· · ·), on these labels to get the final label for xrs.
Fast Sampling. To calculate a reversed sample, the standard reverse process of DDPM-based models repeatedly applies a "single-step" operation n times, i.e., x̂0 = Reverse(· · ·Reverse(Reverse(√αn xrs; n); n − 1); · · · ; 1). Here x̂i−1 = Reverse(x̂i; i) is equivalent to sampling x̂i−1 from N(x̂i−1; µθ(x̂i, i), Σθ(x̂i, i)), where µθ(x̂i, i) = (1/√(1 − βi)) ( x̂i − (βi/√(1 − αi)) ϵθ(x̂i, i) ) and Σθ := exp(v log βi + (1 − v) log β̃i). Here v is a parameter learned by DDPM and β̃i = ((1 − αi−1)/(1 − αi)) βi.
To reduce the time complexity, we use the uniform sub-sampling strategy from Nichol & Dhariwal (2021): we uniformly sample a subsequence of size b from the original N-step reverse process. Note that Carlini et al. (2022) set b = 1 for "one-shot" sampling; in this case x̂0 = (1/√αn) (xn − √(1 − αn) ϵθ(√αn xrs, n)) is a deterministic value, so the reverse process does not sample from a posterior data distribution conditioned on the input. Instead, we can tune the number of sub-sampled DDPM steps to be larger than one (b > 1) so as to sample from a posterior data distribution conditioned on the input. Details about fast sampling are given in Appendix C.2.
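As a concrete illustration of step (1) and the input scaling in step (2), here is a hedged numpy sketch; the array `alphas_cumprod` (the cumulative α values of a pretrained DDPM) and the linear beta schedule are assumptions of the sketch, not values from a specific released model.

```python
import numpy as np

def map_sigma_to_timestep(alphas_cumprod, sigma):
    """Pick the timestep n whose cumulative alpha is closest to 1 / (1 + sigma^2)."""
    target = 1.0 / (1.0 + sigma ** 2)
    return int(np.argmin(np.abs(alphas_cumprod - target)))  # 0-indexed step

# Illustrative schedule: N = 1000 steps, betas linearly spaced in [1e-4, 0.02].
betas = np.linspace(1e-4, 0.02, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)

sigma = 0.5                                  # randomized smoothing noise level
n = map_sigma_to_timestep(alphas_cumprod, sigma)
scale = np.sqrt(alphas_cumprod[n])           # multiply x_rs by this before denoising
print(n, scale)
```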
5 EXPERIMENTS
In this section, we use DensePure to evaluate certified robustness on two standard datasets, CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009).
Experimental settings. We follow the experimental setting from Carlini et al. (2022). Specifically, for CIFAR-10, we use the 50-M unconditional improved diffusion model from Nichol & Dhariwal (2021) as the diffusion model. We select the ViT-B/16 model (Dosovitskiy et al., 2020) pretrained on ImageNet-21k and finetuned on CIFAR-10 as the classifier, which achieves 97.9% accuracy on CIFAR-10. For ImageNet, we use the unconditional 256×256 guided diffusion model from Dhariwal & Nichol (2021) as the diffusion model and the pretrained BEiT large model (Bao et al., 2021) trained on ImageNet-21k as the classifier, which achieves 88.6% top-1 accuracy on the ImageNet-1k validation set. We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure, we set K = 40 and b = 10 except for the ablation study. Details about the baselines are given in the appendix.
5.1 MAIN RESULTS
We evaluate DensePure on subsets of CIFAR-10 and ImageNet. We choose the same subsets as in Cohen et al. (2019): 500 samples for CIFAR-10 and 100 samples for ImageNet (the results with 500 samples are shown in Appendix D.10). The results are shown in Table 1. Compared with the models that are carefully trained with randomized smoothing techniques in an end-to-end manner (i.e., without an off-the-shelf classifier), we observe that our method with a standard off-the-shelf classifier outperforms them at smaller ϵ = {0.25, 0.5} on both CIFAR-10 and ImageNet while achieving comparable performance at larger ϵ = {0.75, 1.0}. Compared with the non-diffusion-based methods that use an off-the-shelf classifier (i.e., Denoised (Salman et al., 2020) and Lee (Lee, 2021)), both our method and Carlini et al. (2022) are significantly better.
These results verify the non-trivial adversarial robustness improvements introduced by the diffusion model. For ImageNet, our method is consistently better than all prior methods by a large margin.
Since both Carlini et al. (2022) and DensePure use the diffusion model, to better understand the importance of our design, which approximates the label of the high-density region in the conditional distribution, we compare DensePure with Carlini et al. (2022) in a more fine-grained manner.
We show the detailed certified robustness of the model for different σ at different radii for CIFAR-10 in Figure 3-left and for ImageNet in Figure 3-right. We also present our certified accuracy at different ϵ in Appendix D.3. From these results, we find that our method is consistently better at most ϵ (except ϵ = 0) across different σ. The performance margin between ours and Carlini et al. (2022) becomes even larger at large ϵ. These results further indicate that although the diffusion model improves model robustness, leveraging the posterior data distribution conditioned on the input instance (as DensePure does) via the reverse process, instead of using a single sample (Carlini et al. (2022)), is the key to better robustness. Additionally, we use off-the-shelf classifiers with ViT-based architectures trained on a larger dataset. In the later ablation study, we select the CNN-based architecture Wide-ResNet trained on the standard dataset from scratch; our method still achieves non-trivial robustness. Further, our experiments in Appendix D.7 show that removing the diffusion model from DensePure deteriorates the performance, which further verifies that our design is non-trivial.
5.2 ABLATION STUDY
Voting samples (K) We first show how K affects the certified accuracy. For efficiency, we select b = 10. We conduct experiments on both datasets. We show the certified accuracy at different r for σ = 0.25 in Figure 4. The results for σ = 0.5, 1.0 and for CIFAR-10 are shown in Appendix D.4. Compared with the baseline (Carlini et al., 2022), we find that a larger majority vote number leads to better certified accuracy. This verifies that DensePure indeed benefits adversarial robustness and that a good approximation of the label of the high-density region requires a large number of voting samples. We find that the certified accuracy almost converges at K = 40; thus, we set K = 40 in our experiments. The results for the other σ show a similar tendency. To further improve time efficiency, we can use K-Consensus (Horváth et al., 2021), which accelerates the majority vote process by 45% ∼ 60% with a negligible performance drop. The experimental details and results are in Appendix D.8.
Fast sampling steps (b) To investigate the role of b, we conduct additional experiments with b ∈ {2, 5} at σ = 0.25. The results on ImageNet are shown in Figure 4, and results for σ = 0.5, 1.0 and for CIFAR-10 are shown in Appendix D.5. Looking at the results with majority vote, we find that a larger b leads to better certified accuracy since a larger b generates images of higher quality. Looking at the results without majority vote, the conclusion is the opposite: a larger b leads to lower certified accuracy, which contradicts our intuition. We conjecture that although more sampling steps normally lead to better image recovery quality, they also introduce more randomness, increasing the probability that the reversed image lands in a data region with the wrong label. These results further verify that the majority vote is necessary for better performance.
Different architectures One advantage of DensePure is that it uses an off-the-shelf classifier, so it can plug in any classifier. We choose convolutional neural network (CNN)-based architectures: Wide-ResNet28-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 with 95.1% accuracy and Wide-ResNet50-2 for ImageNet with 81.5% top-1 accuracy, at σ = 0.25. The results are shown in Table 2 and Figure E in Appendix D.6. Results for more model architectures and more σ on ImageNet are also shown in Appendix D.6. We show that our method can enhance the certified robustness of any given classifier trained on the original data distribution. Notably, although the performance of the CNN-based classifier is lower than that of the Transformer-based classifier, DensePure with a CNN-based classifier can outperform Carlini et al. (2022) with a ViT-based classifier (except at ϵ = 0 for CIFAR-10).
6 RELATED WORK
Using an off-the-shelf generative model to purify adversarial perturbations has become an important direction in adversarial defense. Previous works have developed various purification methods based on different generative models, such as GANs (Samangouei et al., 2018), autoregressive generative models (Song et al., 2018), and energy-based models (Du & Mordatch, 2019; Grathwohl et al., 2020; Hill et al., 2021). More recently, as diffusion models (or score-based models) achieve better generation quality than other generative models (Ho et al., 2020; Dhariwal & Nichol, 2021), many works consider using diffusion models for adversarial purification (Nie et al., 2022; Wu et al., 2022; Sun et al., 2022). Although they report good empirical results in defending against existing adversarial attacks (Nie et al., 2022), there is no provable guarantee of the robustness of such methods. On the other hand, certified defenses provide guarantees of robustness (Mirman et al., 2018; Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2020; Horváth et al., 2021; Zhang et al., 2018; Raghunathan et al., 2018a;b; Salman et al., 2019b; Wang et al., 2021). They provide a lower bound on model accuracy under constrained perturbations. Among them, approaches based on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019a; Jeong & Shin, 2020; Zhai et al., 2020; Horváth et al., 2021; Jeong et al., 2021; Salman et al., 2020; Lee, 2021; Carlini et al., 2022) show great scalability and achieve promising performance on large networks and datasets. The work most similar to ours is Carlini et al. (2022), which uses diffusion models combined with standard classifiers for certified defense. However, they view the diffusion model as a black box without a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
7 CONCLUSION
In this work, we theoretically prove that the diffusion model can purify adversarial examples back to the corresponding clean samples with high probability, as long as the data density of the corresponding clean samples is high enough. Our theoretical analysis characterizes the conditional distribution of the reversed samples given the adversarial input, generated by the diffusion model's reverse process. Using the highest density point in the conditional distribution as the deterministic reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process, which is potentially much larger than in previous methods. Our analysis inspires us to propose an effective pipeline, DensePure, for adversarial robustness. We conduct comprehensive experiments to show the effectiveness of DensePure by evaluating its certified robustness via the randomized smoothing algorithm. Note that DensePure is an off-the-shelf pipeline that does not require training a smoothed classifier. Our results show that DensePure achieves new state-of-the-art certified robustness against L2-norm bounded perturbations. We hope that our work sheds light on an in-depth understanding of diffusion models for adversarial robustness.
Limitations. The time complexity of DensePure is high since it requires repeating the reverse process multiple times. In this paper, we use fast sampling to reduce the time complexity and show that the setting (b = 2 and K = 10) can achieve nontrivial certified accuracy. We leave more advanced fast sampling strategies as future work.
ETHICS STATEMENT
Our work can positively impact society by improving the robustness and security of AI systems. We have not involved human subjects or data set releases; instead, we carefully follow the provided licenses of existing data and models for developing and evaluating our method.
8 ACKNOWLEDGMENT
We thank the support of NSF grant No.1910100, NSF CNS 2046726, C3 AI and DHS under grant No. 17STQAC00001-06-00, DARPA under grant N66001-15-C-4066, the Center for Long-Term Cybersecurity, and Berkeley Deep Drive. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the sponsors.
REPRODUCIBILITY STATEMENT
For theoretical analysis, all necessary assumptions are listed in B.1 and the complete proofs are included in B.2. The experimental setting and datasets are provided in section 5. The pseudo-code for DensePure is in C.1 and the fast sampling procedures are provided in C.2.
APPENDIX
A NOTATIONS
p : data distribution
P(A) : probability of event A
C^k : set of functions with continuous k-th derivatives
w(t) : standard Wiener process
w̄(t) : reverse-time standard Wiener process
h(x, t) : drift coefficient in SDE
g(t) : diffusion coefficient in SDE
αt : scaling coefficient at time t
σt² : variance of the added Gaussian noise at time t
{xt}t∈[0,1] : diffusion process generated by SDE
{x̂t}t∈[0,1] : reverse process generated by reverse-SDE
pt : distribution of xt and x̂t
{x1, x2, . . . , xN} : diffusion process generated by DDPM
{βi}, i = 1, . . . , N : pre-defined noise scales in DDPM
ϵa : adversarial attack
xa : adversarial sample
xa,t : scaled adversarial sample
f(·) : classifier
g(·) : smoothed classifier
P(x̂0 = x | x̂t = xa,t) : density of the conditional distribution generated by reverse-SDE based on xa,t
P(xa; t) : purification model returning the highest density point
G(x0) : data region with the same label as x0
D_P^f(G(x0); t) : robust region for G(x0) associated with base classifier f and purification model P
r_P^f(x0; t) : robust radius of the point x0 associated with base classifier f and purification model P
Dsub(x0; t) : convex robust sub-region
sθ(x, t) : score function
{xθt}t∈[0,1] : reverse process generated by the score-based diffusion model
P(xθ0 = x | xθt = xa,t) : density of the conditional distribution generated by the score-based diffusion model based on xa,t
λ(τ) : weighting scheme of the training loss for the score-based diffusion model
JSM(θ, t; λ(·)) : truncated training loss for the score-based diffusion model
µt, νt : path measures for {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively
B MORE DETAILS ABOUT THEORETICAL ANALYSIS
B.1 ASSUMPTIONS
(i) The data distribution p ∈ C² and Ex∼p[||x||₂²] < ∞.
(ii) ∀t ∈ [0, T]: h(·, t) ∈ C¹; ∃C > 0, ∀x ∈ Rⁿ, t ∈ [0, T]: ||h(x, t)||₂ ≤ C(1 + ||x||₂).
(iii) ∃C > 0, ∀x, y ∈ Rⁿ: ||h(x, t) − h(y, t)||₂ ≤ C||x − y||₂.
(iv) g ∈ C and ∀t ∈ [0, T], |g(t)| > 0.
(v) ∀t ∈ [0, T]: sθ(·, t) ∈ C¹; ∃C > 0, ∀x ∈ Rⁿ, t ∈ [0, T]: ||sθ(x, t)||₂ ≤ C(1 + ||x||₂).
(vi) ∃C > 0, ∀x, y ∈ Rⁿ: ||sθ(x, t) − sθ(y, t)||₂ ≤ C||x − y||₂.
B.2 THEOREMS AND PROOFS
Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and point xa,t = √αt xa will generate a reversed random variable x̂0 with conditional distribution
P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσt²)ⁿ)) exp( −||x − xa||₂² / (2σt²) ),
where σt² = (1 − αt)/αt is the variance of the Gaussian noise added at timestamp t in the diffusion process SDE.
Proof. Under the assumption, we know {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, which means
P(x̂0 = x | x̂t = xa,t) = P(x̂0 = x, x̂t = xa,t) / P(x̂t = xa,t)
= P(x0 = x, xt = xa,t) / P(xt = xa,t)
= P(x0 = x) P(xt = xa,t | x0 = x) / P(xt = xa,t)
∝ P(x0 = x) · (1/√((2πσt²)ⁿ)) exp( −||x − xa||₂² / (2σt²) )
= p(x) · (1/√((2πσt²)ⁿ)) exp( −||x − xa||₂² / (2σt²) ),
where the third equation is due to the chain rule of probability and the last equation is a result of the diffusion process.
Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with the ground-truth label and xa be the adversarial sample; then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)⊤(x′0 − x0) < σt² log( p(x0)/p(x′0) ) + ||x′0 − x0||₂² / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. We start with part (i).
The main idea is to prove that a point x′0 such that f(x ′ 0) ̸= f(x0) should have lower density than x0 in the conditional distribution in Theorem 3.1 so that P(xa; t) cannot be x′0. In other words, we should have
P (x̂0 = x0|x̂t = xa,t) > P (x̂0 = x′0 | x̂t = xa,t) .
By Theorem 3.1, this is equivalent to
p(x0) · (1/√((2πσt²)ⁿ)) exp( −||x0 − xa||₂² / (2σt²) ) > p(x′0) · (1/√((2πσt²)ⁿ)) exp( −||x′0 − xa||₂² / (2σt²) )
⇔ log( p(x0)/p(x′0) ) > (1/(2σt²)) ( ||x0 − xa||₂² − ||x′0 − xa||₂² )
⇔ log( p(x0)/p(x′0) ) > (1/(2σt²)) ( ||x0 − xa||₂² − ||x′0 − x0 + x0 − xa||₂² )
⇔ log( p(x0)/p(x′0) ) > (1/(2σt²)) ( 2(xa − x0)⊤(x′0 − x0) − ||x′0 − x0||₂² ).
Re-organizing the above inequality, we obtain
(xa − x0)⊤(x′0 − x0) < σt² log( p(x0)/p(x′0) ) + ||x′0 − x0||₂² / 2.
Note that the order of xa is at most one in every term of the above inequality, so the inequality actually defines a half-space in Rn for every (x0,x′0) pair. Further, we have to satisfy the inequality for every x′0 such that f(x ′ 0) ̸= f(x0), therefore, by intersecting over all such half-spaces, we obtain a convex Dsub (x0; t). Then we prove part (ii).
On the one hand, if xa ∈ D (G(x0); t), then there exists one x̃0 such that f(x̃0) = f(x0) and xa ∈ Dsub (x̃0; t). By part (i), x̃0 has higher probability than all other points with different labels from x0 in the conditional distribution P (x̂0 = x|x̂t = xa,t) characterized by Theorem 3.1. Therefore, P(xa; t) should have the same label as x0. On the other hand, if xa /∈ D (G(x0); t), then there is a point x̃1 with different label from x0 such that for any x̃0 with the same label as x0, P (x̂0 = x̃1|x̂t = xa,t) > P (x̂0 = x̃0|x̂t = xa,t). In other words, P(xa; t) would have different label from x0.
Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have
DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)),
where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively,
JSM(θ, t; λ(·)) := (1/2) ∫_0^t E_{pτ(x)}[ λ(τ) ||∇x log pτ(x) − sθ(x, τ)||₂² ] dτ,
sθ(x, τ) is the score function that approximates ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. Similar to the proof of (Song et al., 2021a, Theorem 1), let µt and νt be the path measures of the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, both based on the scaled adversarial sample xa,t. Under conditions B.1, the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013):
DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = −Eµt[ log(dνt/dµt) ]
(i)= Eµt[ ∫_0^t g(τ) (∇x log pτ(x) − sθ(x, τ)) dw̄τ + (1/2) ∫_0^t g(τ)² ||∇x log pτ(x) − sθ(x, τ)||₂² dτ ]
(ii)= Eµt[ (1/2) ∫_0^t g(τ)² ||∇x log pτ(x) − sθ(x, τ)||₂² dτ ]
= (1/2) ∫_0^t E_{pτ(x)}[ g(τ)² ||∇x log pτ(x) − sθ(x, τ)||₂² ] dτ
= JSM(θ, t; g(·)²),
where (i) is due to the Girsanov theorem and (ii) is due to the martingale property of Itô integrals.
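To connect JSM with the quantity optimized in practice, the following is a hedged PyTorch sketch of the denoising score-matching surrogate used to train score-based models; the network `score_net`, its call signature, and the weighting λ(t) = 1 − αt are assumptions of the sketch, not part of any released codebase.

```python
import torch

def dsm_loss(score_net, x0, alphas_cumprod):
    """One-minibatch denoising score-matching loss (hedged sketch).

    score_net      : network s_theta(x_t, t) approximating the score of p_t
    x0             : clean minibatch, shape (B, ...)
    alphas_cumprod : 1-D tensor of cumulative alphas alpha_1..alpha_N
    """
    B, N = x0.shape[0], alphas_cumprod.shape[0]
    t = torch.randint(0, N, (B,), device=x0.device)            # random timesteps
    a = alphas_cumprod[t].view(B, *([1] * (x0.dim() - 1)))     # broadcastable alpha_t
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps                  # diffused sample
    target = -eps / (1 - a).sqrt()                             # score of the Gaussian kernel
    residual = score_net(xt, t) - target
    # Weighting lambda(t) = (1 - alpha_t) is one common choice (an assumption here).
    return ((1 - a) * residual ** 2).flatten(1).sum(dim=1).mean()
```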
C MORE DETAILS ABOUT DENSEPURE
C.1 PSEUDO-CODE
We provide the pseudo-code of DensePure in Algorithm 1 and Algorithm 2.
Algorithm 1 DensePure pseudo-code with the highest density point
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose ψ = t
2: Input sample xa = x0 + ϵa
3: Compute x̂0 = P(xa; ψ)
4: ŷ = f(x̂0)
Algorithm 2 DensePure pseudo-code with majority vote
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose σ
2: Compute αn = 1/(1 + σ²), n = argmin_s { |αs − 1/(1 + σ²)| : s ∈ {1, 2, · · · , N} }
3: Generate input sample xrs = x0 + ϵ, ϵ ∼ N(0, σ²I)
4: Choose schedule S^b; get x̂0^i ← rev(√αn xrs)_i, i = 1, 2, . . . , K with fast sampling
5: ŷ = MV({f(x̂0^1), . . . , f(x̂0^K)}) = argmax_c Σ_{i=1}^{K} 1{f(x̂0^i) = c}
C.2 DETAILS ABOUT FAST SAMPLING
Applying the single-step operation n times is a time-consuming process. In order to reduce the time complexity, we follow the method used in Nichol & Dhariwal (2021) and sample a subsequence S^b with b values (i.e., S^b = {n, ⌊n − n/b⌋, · · · , 1}, where S^b_j is the j-th element of S^b, S^b_j = ⌊n − jn/b⌋ for all j < b, and S^b_b = 1) from the original schedule S (i.e., S = {n, n − 1, · · · , 1}, where Sj = j is the j-th element of S).
Within this context, we adapt the original α schedule α^S = {α1, · · · , αi, · · · , αn} used for the single-step operation to the new schedule α^{S^b} = {α_{S^b_1}, · · · , α_{S^b_j}, · · · , α_{S^b_b}} (i.e., the i-th element of α^{S^b} is α^{S^b}_i = α_{S^b_i} = α_{⌊n − in/b⌋}). We calculate the corresponding schedules β^{S^b} = {β^{S^b}_1, · · · , β^{S^b}_b} and β̃^{S^b} = {β̃^{S^b}_1, · · · , β̃^{S^b}_b}, where β^{S^b}_i = 1 − α^{S^b}_i / α^{S^b}_{i−1} and β̃^{S^b}_i = ((1 − α^{S^b}_{i−1}) / (1 − α^{S^b}_i)) β^{S^b}_i. With these new schedules, we can use b reverse steps to calculate x̂0 = Reverse(· · ·Reverse(Reverse(xn; S^b_b); S^b_{b−1}); · · · ; 1). Since Σθ(x_{S^b_i}, S^b_i) is parameterized as a range between β^{S^b} and β̃^{S^b}, it is automatically rescaled. Thus, x̂_{S^b_{i−1}} = Reverse(x̂_{S^b_i}; S^b_i) is equivalent to sampling x_{S^b_{i−1}} from N(x_{S^b_{i−1}}; µθ(x_{S^b_i}, S^b_i), Σθ(x_{S^b_i}, S^b_i)).

Method | Noise | ϵ = 0.0 | ϵ = 0.25 | ϵ = 0.5 | ϵ = 0.75 | ϵ = 1.0
Carlini (Carlini et al., 2022) | σ = 0.25 | 88.0 | 73.8 | 56.2 | 41.6 | 0.0
Carlini (Carlini et al., 2022) | σ = 0.5 | 74.2 | 62.0 | 50.4 | 40.2 | 31.0
Carlini (Carlini et al., 2022) | σ = 1.0 | 49.4 | 41.4 | 34.2 | 27.8 | 21.8
Ours | σ = 0.25 | 87.6(-0.4) | 76.6(+2.8) | 64.6(+8.4) | 50.4(+8.8) | 0.0(+0.0)
Ours | σ = 0.5 | 73.6(-0.6) | 65.4(+3.4) | 55.6(+5.2) | 46.0(+5.8) | 37.4(+6.4)
Ours | σ = 1.0 | 55.0(+5.6) | 47.8(+6.4) | 40.8(+6.6) | 33.0(+5.2) | 28.2(+6.4)
Table A: Certified accuracy (%) at ϵ compared with Carlini et al. (2022) for CIFAR-10 at all σ. The numbers in brackets are the difference in certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).
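Because the schedule construction above is mechanical, a hedged numpy sketch may help; the `reverse_step` callable is a hypothetical stand-in for the learned single-step sampler, and none of the names come from a released codebase.

```python
import numpy as np

def subsample_schedule(alphas_cumprod, n, b):
    """Build S^b = {n, floor(n - n/b), ..., 1} and its alpha/beta/beta-tilde arrays."""
    steps = sorted({max(1, int(n - j * n / b)) for j in range(b)} | {1})  # ascending
    a_sub = alphas_cumprod[np.array(steps) - 1]            # cumulative alphas on S^b
    a_prev = np.concatenate(([1.0], a_sub[:-1]))           # alpha at the previous kept step
    betas = 1.0 - a_sub / a_prev                           # beta^{S^b}_i
    betas_tilde = (1.0 - a_prev) / (1.0 - a_sub) * betas   # beta-tilde^{S^b}_i
    return steps, betas, betas_tilde

def run_fast_reverse(x_start, alphas_cumprod, n, b, reverse_step):
    """Apply roughly b reverse steps from step n down to step 1."""
    steps, betas, betas_tilde = subsample_schedule(alphas_cumprod, n, b)
    x = x_start
    for i in reversed(range(len(steps))):                  # largest kept step first
        x = reverse_step(x, steps[i], betas[i], betas_tilde[i])
    return x
```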
D MORE EXPERIMENTAL DETAILS AND RESULTS
D.1 IMPLEMENTATION DETAILS
We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. The number of Monte Carlo samples used when computing the certified radius is 100,000 for CIFAR-10 and 10,000 for ImageNet. We evaluate the certified robustness on a 500-sample subset of the CIFAR-10 test set and a 100-sample subset of the ImageNet validation set. For the parameters of DensePure, we set K = 40 and b = 10 except for the ablation study.
D.2 BASELINES.
We select randomized smoothing based methods as our baselines, including PixelDP (Lecuyer et al., 2019), RS (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a), Consistency (Jeong & Shin, 2020), MACER (Zhai et al., 2020), Boosting (Horváth et al., 2021), SmoothMix (Jeong et al., 2021), Denoised (Salman et al., 2020), Lee (Lee, 2021), and Carlini (Carlini et al., 2022). Among them, PixelDP, RS, SmoothAdv, Consistency, MACER, and SmoothMix require training a smoothed classifier for better certification performance, while the others do not. Salman et al. and Lee use an off-the-shelf classifier but without the diffusion model. The method most similar to ours is Carlini et al., which also uses both an off-the-shelf diffusion model and an off-the-shelf classifier. The above two settings mainly follow Carlini et al. (2022), which makes it easier to compare with their results.
D.3 MAIN RESULTS FOR CERTIFIED ACCURACY
We compare with Carlini et al. (2022) in a more fine-grained manner. We provide the certified accuracy at different ϵ in Table A for CIFAR-10 and Table B for ImageNet, and include the accuracy difference between ours and Carlini et al. (2022) in brackets. We observe from the tables that the certified accuracy of our method outperforms Carlini et al. (2022) except at ϵ = 0 with σ = 0.25, 0.5 for CIFAR-10.
D.4 EXPERIMENTS FOR VOTING SAMPLES
Here we provide more experiments with σ ∈ {0.5, 1.0} and b = 10 for different numbers of voting samples K in Figure A and Figure B. The results for CIFAR-10 are in Figure G. We can draw the same conclusion as in the main text.
Method | Noise | ϵ = 0.0 | ϵ = 0.5 | ϵ = 1.0 | ϵ = 1.5 | ϵ = 2.0 | ϵ = 3.0
Carlini (Carlini et al., 2022) | σ = 0.25 | 77.0 | 71.0 | 0.0 | 0.0 | 0.0 | 0.0
Carlini (Carlini et al., 2022) | σ = 0.5 | 74.0 | 67.0 | 54.0 | 46.0 | 0.0 | 0.0
Carlini (Carlini et al., 2022) | σ = 1.0 | 59.0 | 53.0 | 49.0 | 38.0 | 29.0 | 22.0
Ours | σ = 0.25 | 80.0(+3.0) | 76.0(+5.0) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0)
Ours | σ = 0.5 | 75.0(+1.0) | 72.0(+5.0) | 62.0(+8.0) | 49.0(+3.0) | 0.0(+0.0) | 0.0(+0.0)
Ours | σ = 1.0 | 61.0(+2.0) | 57.0(+4.0) | 53.0(+4.0) | 49.0(+11.0) | 37.0(+8.0) | 26.0(+4.0)
Table B: Certified accuracy (%) at ϵ compared with Carlini et al. (2022) for ImageNet at all σ. The numbers in brackets are the difference in certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).
Figure A: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line in the figure represents the certified accuracy among different vote numbers K with Gaussian noise σ = 0.50.
D.5 EXPERIMENTS FOR FAST SAMPLING STEPS
We also conduct additional experiments with b ∈ {1, 2, 10} at σ = 0.5, 1.0. The results are shown in Figure C and Figure D. The results for CIFAR-10 are in Figure G. We draw the same conclusion as in the main text.
D.6 EXPERIMENTS FOR DIFFERENT ARCHITECTURES
We try different model architectures on ImageNet, including Wide ResNet-50-2 and ResNet-152, with b = 2 and K = 10. The results are shown in Figure F. We find that our method outperforms Carlini et al. (2022) for all σ across different classifiers.
D.7 EXPERIMENTS FOR RANDOMIZED SMOOTHING WITHOUT DIFFUSION MODEL
To show the effectiveness of our diffusion model design, we remove the diffusion model from our pipeline and conduct experiments. First, we perform randomized smoothing directly on the pretrained classifier used in DensePure (i.e., ViT-B/16 for CIFAR-10 and BEiT for ImageNet). The results are shown in Table C and Table D. The number in brackets is the robust accuracy of the pretrained classifier minus the robust accuracy of DensePure. From the results, we conclude that without the help of diffusion models, neither ViT nor BEiT reaches high certified accuracy.
Second, we conduct additional experiments to fairly compare with randomized smoothing without diffusion models under the majority vote setting. Specifically, we activate droppath in BEiT at the inference stage to support majority votes. The other settings are the same as DensePure. The results are shown in Table E. The number in brackets is the robust accuracy of BEiT with majority votes minus the robust accuracy of DensePure. We find that simply performing majority votes on the BEiT classifier does not result in higher certified robustness.
Figure B: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line in the figure represents the certified accuracy among different vote numbers K with Gaussian noise σ = 1.00.
Figure C: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line in the figure shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.50.
Third, to compare with randomized smoothing without the diffusion model, we also evaluate the certified accuracy of Gaussian augmentation-trained ViT models on CIFAR-10. The results in Table F show that DensePure still achieves higher certified accuracy than randomized smoothing, even on Gaussian-augmented models. The numbers in brackets are the difference between the robust accuracy of Gaussian-augmentation randomized smoothing and that of DensePure.
D.8 EXPERIMENTS FOR K-CONSENSUS AGGREGATION
To improve the efficiency of our algorithm, we use K-Consensus Aggregation, where an early stop is triggered if the classification results of k consecutive reversed samples are the same. Here we calculate the certified robustness for 100 subsamples of CIFAR-10 and ImageNet with 2 sampling steps, a maximum of 10 majority votes, and a consensus threshold k = 3. Results are shown in Table G and Table H. The "Avg MV" column in the tables reports the average number of majority votes actually required by our algorithm. For instance, if the predicted labels of the first 3 reversed samples are the same, the actual number of majority votes is 3. The numbers in brackets are the differences from the certified accuracy without K-Consensus Aggregation.
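A hedged Python sketch of this early-stopping vote is given below; `reverse_process` and `classifier` are the same hypothetical callables used in the earlier pipeline sketch, and the threshold name `k` mirrors the consensus threshold in the text.

```python
def k_consensus_vote(x, reverse_process, classifier, max_votes=10, k=3):
    """Majority vote that stops early once k consecutive predictions agree."""
    labels = []
    for _ in range(max_votes):
        labels.append(int(classifier(reverse_process(x))))
        if len(labels) >= k and len(set(labels[-k:])) == 1:
            break                                  # consensus reached, stop sampling
    majority = max(set(labels), key=labels.count)  # fall back to a plain majority vote
    return majority, len(labels)                   # also report the votes actually used
```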
D.9 EXPERIMENTS FOR CERTIFIED ACCURACY WITH LESS SAMPLING STEPS AND VOTE NUMBERS
We also conduct additional experiments with 2 sampling steps and 5 majority votes. The results are shown in Table I. We find that our method still achieves better results than the existing method.
Figure D: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line in the figure shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 1.00.
Figure E: Certified accuracy with different architectures (left: CIFAR-10, right: ImageNet). Each line in the figure shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.25.
D.10 EXPERIMENTS FOR DENSEPURE 500 TEST SAMPLING NUMBER RESULTS ON IMAGENET
We increase the number of ImageNet test samples from 100 to 500 and report the updated results in Table J and Table K. We can draw a similar conclusion.
Figure F: Certified accuracy on ImageNet for different architectures (left: Wide ResNet-50-2, right: ResNet-152). The lines represent the certified accuracy at different L2 perturbation bounds with Gaussian noise σ ∈ {0.25, 0.50, 1.00}.
Figure G: Ablation study. The left image shows the certified accuracy among different vote numbers at different radii ϵ ∈ {0.0, 0.25, 0.5, 0.75}; each line represents the certified accuracy of our method among different vote numbers K with Gaussian noise σ = 0.25. The right image shows the certified accuracy with different fast sampling steps b; each line shows the certified accuracy among different L2 adversarial perturbation bounds.
Noise | ϵ = 0.0 | ϵ = 0.25 | ϵ = 0.5 | ϵ = 0.75 | ϵ = 1.0
σ = 0.25 | 20.8(-66.8) | 7.4(-69.2) | 1.8(-62.8) | 0.2(-50.2) | 0.0(+0.0)
σ = 0.5 | 11.6(-62.0) | 6.6(-58.8) | 3.8(-51.8) | 1.2(-44.8) | 0.2(-37.2)
σ = 1.0 | 10.6(-44.4) | 10.6(-37.4) | 9.4(-31.4) | 9.4(-23.6) | 9.4(-18.8)
Table C: Certified accuracy (%) of randomized smoothing on the pretrained classifier ViT-B/16 at all σ for CIFAR-10.
Noise | ϵ = 0.0 | ϵ = 0.5 | ϵ = 1.0 | ϵ = 1.5 | ϵ = 2.0 | ϵ = 3.0
σ = 0.25 | 73.2(-10.8) | 55.8(-22.0) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0)
σ = 0.5 | 7.8(-72.4) | 4.6(-71.0) | 3.2(-63.8) | 1.0(-53.6) | 0.0(+0.0) | 0.0(+0.0)
σ = 1.0 | 0.0(-67.8) | 0.0(-61.4) | 0.0(-55.6) | 0.0(-50.0) | 0.0(-42.2) | 0.0(-25.8)
Table D: Certified accuracy (%) of randomized smoothing on the pretrained classifier BEiT at all σ for ImageNet.
Noise | ϵ = 0.0 | ϵ = 0.5 | ϵ = 1.0 | ϵ = 1.5 | ϵ = 2.0 | ϵ = 3.0
σ = 0.25 | 73.8(-10.2) | 58.0(-19.8) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0) | 0.0(+0.0)
σ = 0.5 | 9.0(-71.2) | 7.0(-68.6) | 4.0(-63.0) | 2.0(-52.6) | 0.0(+0.0) | 0.0(+0.0)
σ = 1.0 | 0.0(-67.8) | 0.0(-61.4) | 0.0(-55.6) | 0.0(-50.0) | 0.0(-42.2) | 0.0(-25.8)
Table E: Certified accuracy (%) of randomized smoothing on droppath-activated BEiT with 10 majority votes at all σ for ImageNet.
Noise | ϵ = 0.0 | ϵ = 0.25 | ϵ = 0.5 | ϵ = 0.75 | ϵ = 1.0
σ = 0.25 | 88.2(+0.6) | 71.4(-5.2) | 53.2(-11.4) | 35.2(-15.2) | 0.0(+0.0)
σ = 0.5 | 69.8(-3.8) | 60.0(-5.4) | 48.4(-7.2) | 37.2(-8.8) | 27.2(-10.2)
σ = 1.0 | 49.0(-6.0) | 41.8(-6.0) | 34.0(-6.8) | 27.0(-6.0) | 22.0(-6.2)
Table F: Certified accuracy (%) of randomized smoothing on Gaussian augmentation-trained ViT at all σ on CIFAR-10.
Noise | ϵ = 0.0 | ϵ = 0.25 | ϵ = 0.5 | ϵ = 0.75 | ϵ = 1.0 | Avg MV
σ = 0.25 | 92(+0.0) | 77(+0.0) | 60(+0.0) | 48(-1.0) | 0(+0.0) | 3.84
σ = 0.5 | 74(+0.0) | 65(+0.0) | 53(-1.0) | 45(+0.0) | 40(+0.0) | 4.43
σ = 1.0 | 53(+0.0) | 46(+0.0) | 42(+0.0) | 31(+0.0) | 25(+0.0) | 5.49
Table G: Certified accuracy (%) and average majority votes with 2 sampling steps and a k = 3 consensus threshold at all σ for CIFAR-10.
Noise | ϵ = 0.0 | ϵ = 0.5 | ϵ = 1.0 | ϵ = 1.5 | ϵ = 2.0 | ϵ = 3.0 | Avg MV
σ = 0.25 | 78(+0.0) | 74(+0.0) | 0(+0.0) | 0(+0.0) | 0(+0.0) | 0(+0.0) | 3.34
σ = 0.5 | 75(+0.0) | 69(+0.0) | 61(+0.0) | 47(+0.0) | 0(+0.0) | 0(+0.0) | 3.89
σ = 1.0 | 60(+0.0) | 54(+0.0) | 50(+0.0) | 41(+0.0) | 32(+0.0) | 23(+0.0) | 5.23
Table H: Certified accuracy (%) and average majority votes with 2 sampling steps and a k = 3 consensus threshold at all σ for ImageNet.
Certified accuracy (%) at ϵ; CIFAR-10 columns are ϵ = 0.25, 0.5, 0.75, 1.0 and ImageNet columns are ϵ = 0.5, 1.0, 1.5, 2.0, 3.0.
Method | Off-the-shelf | CIFAR-10 0.25 | 0.5 | 0.75 | 1.0 | ImageNet 0.5 | 1.0 | 1.5 | 2.0 | 3.0
PixelDP (Lecuyer et al., 2019) | ✗ | (71.0)22.0 | (44.0)2.0 | - | - | (33.0)16.0 | - | - | - | -
RS (Cohen et al., 2019) | ✗ | (75.0)61.0 | (75.0)43.0 | (65.0)32.0 | (65.0)23.0 | (67.0)49.0 | (57.0)37.0 | (57.0)29.0 | (44.0)19.0 | (44.0)12.0
SmoothAdv (Salman et al., 2019a) | ✗ | (82.0)68.0 | (76.0)54.0 | (68.0)41.0 | (64.0)32.0 | (63.0)54.0 | (56.0)42.0 | (56.0)34.0 | (41.0)26.0 | (41.0)18.0
Consistency (Jeong & Shin, 2020) | ✗ | (77.8)68.8 | (75.8)58.1 | (72.9)48.5 | (52.3)37.8 | (55.0)50.0 | (55.0)44.0 | (55.0)34.0 | (41.0)24.0 | (41.0)17.0
MACER (Zhai et al., 2020) | ✗ | (81.0)71.0 | (81.0)59.0 | (66.0)46.0 | (66.0)38.0 | (68.0)57.0 | (64.0)43.0 | (64.0)31.0 | (48.0)25.0 | (48.0)14.0
Boosting (Horváth et al., 2021) | ✗ | (83.4)70.6 | (76.8)60.4 | (71.6)52.4 | (73.0)38.8 | (65.6)57.0 | (57.0)44.6 | (57.0)38.4 | (44.6)28.6 | (38.6)21.2
SmoothMix (Jeong et al., 2021) | ✓ | (77.1)67.9 | (77.1)57.9 | (74.2)47.7 | (61.8)37.2 | (55.0)50.0 | (55.0)43.0 | (55.0)38.0 | (40.0)26.0 | (40.0)17.0
Denoised (Salman et al., 2020) | ✓ | (72.0)56.0 | (62.0)41.0 | (62.0)28.0 | (44.0)19.0 | (60.0)33.0 | (38.0)14.0 | (38.0)6.0 | - | -
Lee (Lee, 2021) | ✓ | 60.0 | 42.0 | 28.0 | 19.0 | 41.0 | 24.0 | 11.0 | - | -
Carlini (Carlini et al., 2022) | ✓ | (88.0)73.8 | (88.0)56.2 | (88.0)41.6 | (74.2)31.0 | (82.0)74.0 | (77.2)59.8 | (77.2)47.0 | (64.6)31.0 | (64.6)19.0
Ours | ✓ | (87.6)76.6 | (87.6)64.6 | (87.6)50.4 | (73.6)37.4 | (84.0)77.8 | (80.2)67.0 | (80.2)54.6 | (67.8)42.2 | (67.8)25.8
Table J: Certified accuracy compared with existing works. The certified accuracy at ϵ = 0 for each model is in parentheses. The certified accuracy in each cell is from the respective paper except for Carlini et al. (2022). Our diffusion model and classifier are the same as Carlini et al. (2022), where the off-the-shelf classifier uses ViT-based architectures trained on a large dataset (ImageNet-22k).
Certified Accuracy at ϵ(%) Methods Noise 0.0 0.5 1.0 1.5 2.0 3.0
σ = 0.25 82.0 74.0 0.0 0.0 0.0 0.0 Carlini (Carlini et al., 2022) σ = 0.5 77.2 71.8 59.8 47.0 0.0 0.0
σ = 1.0 64.6 57.8 49.2 40.6 31.0 19.0
σ = 0.25 84.0(+2.0) 77.8(+3.8) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) Ours σ = 0.5 80.2(+3.0) 75.6(+3.8) 67.0(+7.2) 54.6(+7 | 1. What is the focus of the paper regarding potential classification problems?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the experimental design and claims?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns regarding the application of diffusion models for adversarial sample classification? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to solve the potential classification problem of adversarial samples with the help of diffusion models. Specifically, the authors first give some analysis of the regions of the conditional generation. They claim that the conditionally generated samples will concentrate on the regions around the original adversarial sample. Then, through majority voting, the proposed method can help find the correct labels. Experiments show that the proposed method does help improve the performance.
Strengths And Weaknesses
Strength:
The paper gives some theoretical analysis about the conditional generation, which can help clarify the understanding of the problem.
The experiments show that the proposed method can improve the robustness of the classification of adversarial inputs.
Weakness:
In general, the classification of adversarial samples seems like overkill here. To improve robustness, the authors need to run the time-consuming diffusion model more than 10 times to obtain the conditional generations and then run the classifier, which seems unreasonable.
The theory part mostly includes clarifications of common sense; it is hard to get novel ideas from the definitions or theorems.
It is also unclear whether it is proper to bound the complex data support with hyper-balls. For example, on the data manifold, if the data labeled 1 is supported in a 2-dimensional rectangle-like region with a large length-to-width ratio, can we use Definition 3.2 or Theorem 3.3 to measure the results?
In theorem 3.4, KL divergence may not be a good method to measure the similarity of two distributions since it is not a distance.
Clarity, Quality, Novelty And Reproducibility
The paper is in general clear and easy to follow, and there seems to be no reproducibility problem. However, the novelty is limited, and the theory does not support the claims very well.
ICLR | Title
DensePure: Understanding Diffusion Models for Adversarial Robustness
Abstract
Diffusion models have been recently employed to improve certified robustness through the process of denoising. However, the theoretical understanding of why diffusion models are able to improve the certified robustness is still lacking, preventing further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to get multiple reversed samples, which are then passed through the classifier, followed by majority voting of inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process in a diffusion model is also high; thus sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with a high probability. By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model’s reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high density region in the conditional distribution so that it can enhance certified robustness. We conduct extensive experiments to demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model via randomized smoothing. We show that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average. Project page: https://densepure.github.io/.
1 INTRODUCTION
Diffusion models have been shown to be a powerful image generation tool (Ho et al., 2020; Song et al., 2021b) owing to their iterative diffusion and denoising processes. These models have achieved state-of-the-art performance on sample quality (Dhariwal & Nichol, 2021; Vahdat et al., 2021) as well as effective mode coverage (Song et al., 2021a). A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time (Song et al., 2021b).
Given the natural denoising property of diffusion models, empirical studies have leveraged them for adversarial purification (Nie et al., 2022; Wu et al., 2022; Carlini et al., 2022). For instance, Nie et al. (2022) employed diffusion models for model purification, DiffPure. They empirically show that by carefully choosing the amount of Gaussian noises added during the diffusion process, adversarial
∗the first four authors contributed equally
perturbations can be removed while preserving the true label semantics. Despite the significant empirical result, there is no provable guarantee of the achieved robustness. A concurrent work (Carlini et al., 2022) instantiated the randomized smoothing approach with the diffusion model to offer a provable guarantee of model robustness against L2-norm bounded adversarial example. However, they do not provide a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
Our Approach. We are the first to theoretically analyze the fundamental properties of diffusion models to understand why and how diffusion models enhance certified robustness. This deeper understanding allows us to propose a new method DensePure to improve the certified robustness of any given classifier more effectively using diffusion models. An illustration of the DensePure framework is provided in Figure 1, where it consists of a pretrained diffusion model and a pretrained classifier. DensePure incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times with different random seeds to approximate the label of the high-density region in the conditional distribution via a simple majority vote strategy. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to calculate their labels. We then apply the majority vote on the set of labels to get the final predicted label.
DensePure is inspired by our theoretical analysis, where we show that the reverse process of the diffusion model provides a conditional distribution of the reversed sample given an adversarial input. Sampling from this conditional distribution can enhance the certified robustness. Specifically, we prove that when the data density of clean samples is high, it is a sufficient condition for the conditional density of the reversed samples to be also high. Therefore, in DensePure, samples from the conditional distribution can recover the ground-truth labels with a high probability.
For understanding and rigorous analysis conveniently, we use the highest density point in the conditional distribution as the deterministic reversed sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model’s reverse process is the union of multiple convex sets, each surrounding a region around the ground-truth label. Compared with the robust region of previous work (Cohen et al., 2019), which only focuses on only one region with the ground-truth label, such the union of multiple convex sets has the potential to provide a much larger robust region, resulting in higher certified robustness. Moreover, the characterization implies that the size of robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels.
We conduct extensive experiments on the ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of DensePure. In particular, we follow the setting from Carlini et al. (2022) and rely on randomized smoothing to certify robustness against adversarial perturbations bounded in the L2-norm. We show that DensePure achieves new state-of-the-art certified robustness on the standard pretrained model without further tuning any model parameters (e.g., smooth augmentation, Cohen et al. (2019)). On ImageNet, it achieves consistently higher certified accuracy than existing methods, with 7% improvement on average, for every σ at every radius ϵ.
Technical Contributions. In this paper, we take the first step toward understanding why and how diffusion models contribute to certified robustness. We make contributions on both theoretical and empirical fronts: (1) In theory, we prove that an adversarial example can be recovered back to the original clean sample with a high probability via the reverse process of a diffusion model. (2) In theory, we characterize the robust region for each point by further taking the highest density point in the conditional distribution generated by the reverse process as the reversed sample. We show that the robust region of a given sample under the diffusion model's reverse process has the potential to be larger. To the best of our knowledge, this is the first work that characterizes the robust region of using the reverse process of the diffusion model for adversarial purification. (3) In practice, we propose DensePure based on our theoretical analysis. We demonstrate that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average.
2 PRELIMINARIES AND BACKGROUNDS
Continuous-Time Diffusion Model. The diffusion model has two components: the diffusion process followed by the reverse process. Given an input random variable x0 ∼ p, the diffusion process adds isotropic Gaussian noises to the data so that the diffused random variable at time t is xt = √ αt(x0 + ϵt), s.t., ϵt ∼ N (0, σ2t I), and σ2t = (1 − αt)/αt, and we denote xt ∼ pt. The forward diffusion process can also be defined by the stochastic differential equation
dx = h(x, t)dt+ g(t)dw, (SDE)
where x0 ∼ p, h : Rd × R 7→ Rd is the drift coefficient, g : R 7→ R is the diffusion coefficient, and w(t) ∈ Rn is the standard Wiener process. Under mild conditions B.1, the reverse process exists and removes the added noise by solving the reverse-time SDE (Anderson, 1982)
dx̂ = [h(x̂, t)− g(t)2▽x̂ log pt(x̂)]dt+ g(t)dw, (reverse-SDE)
where dt is an infinitesimal reverse time step, and w(t) is a reverse-time standard Wiener process.
In our context, we use the conventions of VP-SDE (Song et al., 2021b) where h(x, t) := −(1/2)γ(t)x and g(t) := √γ(t) with γ(t) positive and continuous over [0, 1], such that x(t) = √αt x(0) + √(1 − αt) ϵ where αt = e^{−∫_0^t γ(s)ds} and ϵ ∼ N(0, I). We use {xt}t∈[0,1] and {x̂t}t∈[0,1] to denote the diffusion process and the reverse process generated by SDE and reverse-SDE respectively, which follow the same distribution.
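For intuition about how reverse-SDE is simulated numerically, here is a hedged numpy sketch of an Euler-Maruyama discretization for the VP-SDE; the `score_fn` callable (an approximation of ∇x log pt(x)) and the linear γ(t) schedule are assumptions made only for illustration.

```python
import numpy as np

def reverse_sde_euler(x_t, t_start, score_fn, gamma=lambda t: 0.1 + 19.9 * t, steps=100):
    """Integrate reverse-SDE from time t_start down to 0 with Euler-Maruyama."""
    x = np.array(x_t, dtype=float)
    dt = t_start / steps
    for k in range(steps):
        t = t_start - k * dt
        g2 = gamma(t)                                   # for VP-SDE, g(t)^2 = gamma(t)
        drift = -0.5 * gamma(t) * x - g2 * score_fn(x, t)
        # Backward-in-time Euler step of dx = [h(x,t) - g(t)^2 * score] dt + g(t) dw
        x = x - drift * dt + np.sqrt(g2 * dt) * np.random.randn(*x.shape)
    return x
```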
Discrete-Time Diffusion Model (or DDPM (Ho et al., 2020)). DDPM constructs a discrete Markov chain {x0, x1, · · · , xi, · · · , xN} as the forward process for the training data x0 ∼ p, such that P(xi | xi−1) = N(xi; √(1 − βi) xi−1, βi I), where 0 < β1 < β2 < · · · < βN < 1 are predefined noise scales such that xN approximates Gaussian white noise. Denoting αi = ∏_{s=1}^{i} (1 − βs), we have P(xi | x0) = N(xi; √αi x0, (1 − αi)I), i.e., xi(x0, ϵ) = √αi x0 + √(1 − αi) ϵ, ϵ ∼ N(0, I). The reverse process of DDPM learns a reverse-direction variational Markov chain pθ(xi−1 | xi) = N(xi−1; µθ(xi, i), Σθ(xi, i)). Ho et al. (2020) define ϵθ as a function approximator to predict ϵ from xi such that µθ(xi, i) = (1/√(1 − βi)) ( xi − (βi/√(1 − αi)) ϵθ(xi, i) ). Then the reverse-time samples are generated by x̂i−1 = (1/√(1 − βi)) ( x̂i − (βi/√(1 − αi)) ϵθ∗(x̂i, i) ) + √βi ϵ, ϵ ∼ N(0, I), and the optimal parameters θ∗ are obtained by solving θ∗ := argminθ Ex0,ϵ [ ||ϵ − ϵθ(√αi x0 + √(1 − αi) ϵ, i)||₂² ].
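Since the closed-form marginal P(xi | x0) above is the only piece of the forward process needed later for certification, a small hedged PyTorch sketch of sampling from it is included; the linear beta schedule is an illustrative assumption.

```python
import torch

def ddpm_forward_sample(x0, i, betas):
    """Sample x_i ~ N(sqrt(alpha_i) x0, (1 - alpha_i) I) in closed form.

    x0    : clean input tensor
    i     : target timestep (1-indexed)
    betas : predefined noise scales beta_1..beta_N as a 1-D tensor
    """
    alpha_i = torch.cumprod(1.0 - betas, dim=0)[i - 1]   # alpha_i = prod_{s<=i}(1 - beta_s)
    eps = torch.randn_like(x0)
    return alpha_i.sqrt() * x0 + (1.0 - alpha_i).sqrt() * eps

# Illustrative usage with an assumed schedule of N = 1000 linearly spaced betas.
betas = torch.linspace(1e-4, 0.02, 1000)
x_100 = ddpm_forward_sample(torch.rand(3, 32, 32), 100, betas)
```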
Randomized Smoothing. Randomized smoothing is used to certify the robustness of a given classifier against L2-norm bounded perturbations. It transforms the classifier f into a smoothed version g(x) = argmax_c Pϵ∼N(0,σ²I)(f(x + ϵ) = c), where g is the smoothed classifier and σ is a hyperparameter of g that controls the trade-off between robustness and accuracy. Cohen et al. (2019) show that g(x) induces certifiable robustness for x under the L2-norm with radius R, where R = (σ/2)(Φ⁻¹(pA) − Φ⁻¹(pB)); pA and pB are the probabilities of the most probable class and the "runner-up" class respectively, and Φ⁻¹ is the inverse of the standard Gaussian CDF. pA and pB can be estimated with arbitrarily high confidence via the Monte Carlo method (Cohen et al., 2019).
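To make the certification step concrete, here is a hedged Python sketch of the radius computation from Monte Carlo class counts; the Clopper-Pearson lower bound and the common two-class simplification pB ≤ 1 − pA follow standard randomized smoothing practice, while the function name is an assumption of the sketch.

```python
from scipy.stats import beta, norm

def certified_radius(count_top, count_total, sigma, alpha=0.001):
    """L2 certified radius from Monte Carlo counts of the top class.

    count_top   : noisy samples classified as the top class
    count_total : total noisy samples drawn
    sigma       : std of the Gaussian smoothing noise
    alpha       : allowed failure probability of the bound
    """
    # One-sided Clopper-Pearson lower confidence bound on pA.
    pA_lower = beta.ppf(alpha, count_top, count_total - count_top + 1)
    if pA_lower <= 0.5:
        return 0.0                       # abstain: no radius can be certified
    # With pB <= 1 - pA_lower, R = sigma/2 * (Phi^-1(pA) - Phi^-1(pB)) = sigma * Phi^-1(pA).
    return sigma * norm.ppf(pA_lower)

# Example: 990 of 1000 noisy copies predicted the top class at sigma = 0.5.
print(certified_radius(990, 1000, 0.5))
```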
3 THEORETICAL ANALYSIS
In this section, we theoretically analyze why and how the diffusion model can enhance the robustness of a given classifier. We will analyze directly on SDE and reverse-SDE as they generate the same
stochastic processes {xt}t∈[0,T], and existing works establish approximations of reverse-SDE (Song et al., 2021b; Ho et al., 2020).
We first show in Theorem 3.1 that, given a diffusion model, solving reverse-SDE generates a conditional distribution based on the scaled adversarial sample; this distribution has high density on data regions that have high data density and are close to the adversarial sample. See detailed conditions in B.1. Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and sample xa,t = √αt xa will generate a reversed random variable x̂0 with density
P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσt²)ⁿ)) exp( −||x − xa||₂² / (2σt²) ),
where p is the data distribution and σt² = (1 − αt)/αt is the variance of the Gaussian noise added at time t in the diffusion process.
Proof. (sketch) Under conditions B.1, we know that {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, and the rest of the proof follows Bayes' rule.
Please see the full proofs of this and the following theorems in Appendix B.2. Remark 1. Note that P (x̂0 = x|x̂t = xa,t) > 0 if and only if p(x) > 0, thus the generated reverse sample will be on the data region where we train classifiers.
In Theorem 3.1, the conditional density P(x̂0 = x | x̂t = xa,t) is high if both p(x) and the Gaussian term have high values, i.e., x has high data density and is close to the adversarial sample xa. The latter condition is reasonable since adversarial perturbations are typically bounded due to budget constraints. So the above argument implies that a reversed sample is more likely to have the ground-truth label if the data region with the ground-truth label has high data density. For the sake of theoretical analysis and understanding, we take the point with the highest conditional density P(x̂0 = x | x̂t = xa,t) as the reversed sample, defined as P(xa; t) := argmax_x P(x̂0 = x | x̂t = xa,t). P(xa; t) is a representative of the high-density data region in the conditional distribution, and P(·; t) is a deterministic purification model. In the following, we characterize the robust region for the data region with the ground-truth label under P(·; t). The robust region and robust radius for a general deterministic purification model given a classifier are defined below. Definition 3.2 (Robust Region and Robust Radius). Given a classifier f and a point x0, let G(x0) := {x : f(x) = f(x0)} be the data region where samples have the same label as x0. Then given a deterministic purification model P(·; ψ) with parameter ψ, we define the robust region of G(x0) under P and f as D_P^f(G(x0); ψ) := {x : f(P(x; ψ)) = f(x0)}, i.e., the set of x such that the purified sample P(x; ψ) has the same label as x0 under f. Further, we define the robust radius of x0 as r_P^f(x0; ψ) := max{ r : x0 + ru ∈ D_P^f(x0; ψ), ∀||u||₂ ≤ 1 }, i.e., the radius of the maximum inclined ball of D_P^f(x0; ψ) centered around x0. We will omit P and f when it is clear from the context and write D(G(x0); ψ) and r(x0; ψ) instead. Remark 2. In Definition 3.2, the robust region (resp. radius) is defined for each class (resp. point). When using the point with the highest P(x̂0 = x | x̂t = xa,t) as the reversed sample, ψ := t.
Now given a sample x0 with ground-truth label, we are ready to characterize the robust region D (G(x0);ψ) under purification model P(·; t) and classifier f . Intuitively, if the adversarial sample xa is near to x0 (in Euclidean distance), xa keeps the same label semantics of x0 and so as the purified sample P(xa; t), which implies that f (P(xa;ψ)) = f(x0). However, the condition that xa is near to x0 is sufficient but not necessary since we can still achieve f (P(xa;ψ)) = f(x0) if xa is near to any sample x̃0 with f (P(x̃a;ψ)) = f(x0). In the following, we will show that the robust region D (G(x0);ψ) is the union of the convex robust sub-regions surrounding every x̃0 with the same label as x0. The following theorem characterizes the convex robust sub-region and robust region respectively. Theorem 3.3. Under conditions B.1 and classifier f , let x0 be the sample with ground-truth label and xa be the adversarial sample, then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)⊤(x′0 − x0) < σt² log( p(x0)/p(x′0) ) + ||x′0 − x0||₂² / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. (sketch) (i). Each convex half-space defined by the inequality corresponds to a x′0 such that f(x′0) ̸= f(x0) where xa within satisfies P (x̂0 = x0|x̂t = xa,t) > P (x̂0 = x′0 | x̂t = xa,t). This implies that P(xa; t) ̸= x′0 and f (P(xa;ψ)) = f(x0). The convexity is due to that the intersection of convex sets is convex. (ii). The “if” follows directly from (i). The “only if” holds because if xa /∈ D (G(x0); t), then exists x̃1 such that f(x̃1) ̸= f(x0) and P (x̂0 = x̃1|x̂t = xa,t) > P (x̂0 = x̃0|x̂t = xa,t) ,∀x̃0 s.t. f(x̃0) = f(x0), and thus f (P(xa;ψ)) ̸= f(x0).
Remark 3. Theorem 3.3 implies that when data region G(x0) has higher data density and larger distances to data regions with other labels, it tends to have larger robust region and points in data region tends to have larger radius. Since adversarial attack typically has small magnitude, with large robust region, the adversarial sample can be recovered to the clean sample with a high probability.
In the literature, people focus more on the robust radius (lower bound) r(G(x0); t) (Cohen et al., 2019; Carlini et al., 2022), which can be obtained by finding the maximum inclined ball inside D(G(x0); t) centered at x0. Note that although Dsub(x0; t) is convex, D(G(x0); t) is generally not. Therefore, finding r(G(x0); t) is a non-convex optimization problem. In particular, it can be formulated as a disjunctive optimization problem with integer indicator variables, which is typically NP-hard to solve. One alternative is to find the maximum inclined ball in Dsub(x0; t), which can be formulated as a convex optimization problem whose optimal value provides a lower bound for r(G(x0); t); see the sketch after Figure 2. However, D(G(x0); t) has the potential to provide a much larger robustness radius because it might connect different convex robust sub-regions into one, as shown in Figure 2.
⋃_{i=1}^{3} Dsub(xi; t), where x0, x1, x2 are samples with the ground-truth label and x3 is a sample with another label. xa = x0 + ϵa is an adversarial sample such that P(xa; t) = x1 ≠ x0, and thus the classification is correct even though xa is not reversed back to x0. rsub(x0) < r(x0) illustrates our claim that the union leads to a larger robust radius.
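To complement the discussion above Figure 2, here is a hedged numpy sketch of the convex lower bound on the robust radius: the largest ball centered at x0 inside Dsub(x0; t) has radius equal to the smallest distance from x0 to the bounding half-space hyperplanes. The toy points and log-densities below are assumptions used purely for illustration.

```python
import numpy as np

def dsub_radius_lower_bound(x0, wrong_label_points, log_p_x0, log_p_wrong, sigma_t):
    """Radius of the largest ball centered at x0 contained in D_sub(x0; t)."""
    radii = []
    for x1, log_p_x1 in zip(wrong_label_points, log_p_wrong):
        a = x1 - x0                                      # half-space normal (x0' - x0)
        b = sigma_t ** 2 * (log_p_x0 - log_p_x1) + 0.5 * float(a @ a)
        # Half-space: {xa : (xa - x0)^T a < b}; distance from x0 to its boundary is b / ||a||.
        radii.append(b / np.linalg.norm(a))
    return min(radii)

# Toy 2-D illustration with two differently-labeled points (all numbers are assumptions).
x0 = np.array([0.0, 0.0])
wrong = np.array([[2.0, 0.0], [0.0, 3.0]])
print(dsub_radius_lower_bound(x0, wrong, log_p_x0=0.0,
                              log_p_wrong=np.array([-1.0, -1.0]), sigma_t=0.5))
```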
In practice, we cannot guarantee to establish an exact reverse process like reverse-SDE; instead, we try to establish an approximate reverse process that mimics the exact one. As long as the approximate reverse process is close enough to the exact one, the two processes generate close conditional distributions given the adversarial sample. Then the density and locations of the data regions in the two conditional distributions will not differ much, and neither will the robust region of each data region. We take the score-based diffusion model in Song et al. (2021b) as an example and prove Theorem 3.4 to bound the KL-divergence between the conditional distributions generated by reverse-SDE and by the score-based diffusion model. Ho et al. (2020) showed that using variational inference to fit DDPM is equivalent to optimizing an objective resembling that of the score-based diffusion model with a specific weighting scheme, so the results extend to DDPM. Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)), where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively, JSM(θ, t; λ(·)) := (1/2) ∫_0^t E_{pτ(x)}[ λ(τ) ||∇x log pτ(x) − sθ(x, τ)||₂² ] dτ, sθ(x, τ) is the score function approximating ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. (sketch) Let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on xa,t. Under conditions B.1, µt and νt are uniquely defined and the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013).
Remark 4. Theorem 3.4 shows that if the training loss is smaller, the conditional distributions generated by reverse-SDE and the score-based diffusion model are closer, and they are the same if the training loss is zero. Furthermore, by Pinsker's inequality, the total variation (a distance metric) is upper bounded as DTV(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) ≤ √(JSM(θ, t; λ(·)) / 2).
4 DENSEPURE
Inspired by the theoretical analysis, we introduce DensePure and show how to calculate its certified robustness radius via the randomized smoothing algorithm.
Framework. Our framework, DensePure, consists of two components: (1) an off-the-shelf diffusion model with reverse process rev and (2) an off-the-shelf base classifier f .
The pipeline of DensePure is shown in Figure 1. Given an input x, we feed it into the reverse process rev of the diffusion model to get the reversed sample rev(x) and then repeat the above process K times to get K reversed samples {rev(x)_1, · · · , rev(x)_K}. We feed these K reversed samples into the classifier to get the corresponding predictions {f(rev(x)_1), · · · , f(rev(x)_K)} and then apply the majority vote, termed MV, on these predictions to get the final predicted label ŷ = MV({f(rev(x)_1), · · · , f(rev(x)_K)}) = argmax_c Σ_{i=1}^{K} 1{f(rev(x)_i) = c}.
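To make the pipeline concrete, the following is a minimal PyTorch-style sketch (not the released implementation); the names `reverse` and `classifier` stand for the off-the-shelf diffusion model's reverse process and the base classifier, whose exact interfaces depend on the specific models used.

```python
def densepure_predict(x, reverse, classifier, K=40):
    """Sketch of the DensePure pipeline: K stochastic reversals + majority vote.

    x: input image tensor (possibly adversarial), shape (C, H, W).
    reverse: runs the diffusion model's reverse process once on a batch;
             it is stochastic, so repeated calls yield different reversed samples.
    classifier: maps an image batch to class logits.
    K: number of reversed samples used in the majority vote.
    """
    votes = {}
    for _ in range(K):
        x_rev = reverse(x.unsqueeze(0))                # one draw from P(x_0 | x_t)
        label = int(classifier(x_rev).argmax(dim=-1))  # predicted label of this draw
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)                   # majority-vote label
```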
Certified Robustness of DensePure with Randomized Smoothing.
In this paragraph, we will illustrate the algorithm to calculate certified robustness of DensePure via RS, which offers robustness guarantees for a model under a L2-norm ball.
In particular, we follow the similar setting of Carlini et al. (2022) which uses a DDPM-based diffusion model. The overall algorithm contains three steps:
(1) Our framework first estimates n, the number of steps used for the reverse process of the DDPM-based diffusion model. Since Randomized Smoothing (Cohen et al., 2019) adds Gaussian noise ϵ ∼ N(0, σ2I) to the data input x to get the randomized input xrs = x + ϵ, we map between the noise level required by the randomized example xrs and the noise level required by the diffused data xn (i.e., xn ∼ N(xn; √αn x0, (1 − αn)I)) after n diffusion steps, so that αn = 1/(1 + σ2). In this way, we can compute the corresponding timestep n = argmin_s {|αs − 1/(1 + σ2)| : s ∈ [N]}. (2) Given the calculated timestep n, we scale xrs by √αn to obtain the scaled randomized smoothing sample √αn xrs. Then we feed √αn xrs into the reverse process of the diffusion model K times to get the reversed sample set {x̂_0^1, x̂_0^2, · · · , x̂_0^i, · · · , x̂_0^K}. (3) We feed the obtained reversed sample set into a standard off-the-shelf classifier f to get the corresponding predicted labels {f(x̂_0^1), f(x̂_0^2), . . . , f(x̂_0^i), . . . , f(x̂_0^K)}, and apply the majority vote, denoted MV(· · ·), on these predicted labels to get the final label for xrs.
Fast Sampling. To calculate the reversed sample, the standard reverse process of DDPM-based models requires repeatedly applying a "single-step" operation n times to get the reversed sample x̂0 (i.e., x̂0 = Reverse(· · ·Reverse(· · ·Reverse(Reverse(√αn xrs; n); n − 1); · · · ; i); · · · ; 1)). Here x̂i−1 = Reverse(x̂i; i) is equivalent to sampling x̂i−1 from N(x̂i−1; µθ(x̂i, i), Σθ(x̂i, i)), where µθ(x̂i, i) = (1/√(1 − βi)) (x̂i − (βi/√(1 − αi)) ϵθ(x̂i, i)) and Σθ := exp(v log βi + (1 − v) log β̃i). Here v is a parameter learned by DDPM and β̃i = ((1 − αi−1)/(1 − αi)) βi.
To reduce the time complexity, we use the uniform sub-sampling strategy from Nichol & Dhariwal (2021). We uniformly sample a subsequence of size b from the original N-step reverse process. Note that Carlini et al. (2022) set b = 1 for "one-shot" sampling; in this case, x̂0 = (1/√αn)(xn − √(1 − αn) ϵθ(√αn xrs, n)) is a deterministic value, so the reverse process does not produce a posterior data distribution conditioned on the input. Instead, we tune the number of sub-sampled DDPM steps to be larger than one (b > 1) to sample from a posterior data distribution conditioned on the input. The details about fast sampling are shown in Appendix C.2.
5 EXPERIMENTS
In this section, we use DensePure to evaluate certified robustness on two standard datasets, CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009).
Experimental settings. We follow the experimental setting from Carlini et al. (2022). Specifically, for CIFAR-10, we use the 50-M unconditional improved diffusion model from Nichol & Dhariwal (2021) as the diffusion model. We select the ViT-B/16 model (Dosovitskiy et al., 2020) pretrained on ImageNet-21k and finetuned on CIFAR-10 as the classifier, which achieves 97.9% accuracy on CIFAR-10. For ImageNet, we use the unconditional 256×256 guided diffusion model from Dhariwal & Nichol (2021) as the diffusion model and the pretrained BEiT-large model (Bao et al., 2021) trained on ImageNet-21k as the classifier, which achieves 88.6% top-1 accuracy on the validation set of ImageNet-1k. We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure, we set K = 40 and b = 10 except for the results in the ablation study. The details about the baselines are in the appendix.
5.1 MAIN RESULTS
We perform DensePure on a subset of CIFAR-10 or ImageNet. We choose the same subset as in Cohen et al. (2019): 500 samples for CIFAR-10 and 100 samples for ImageNet (the results with 500 samples are shown in Appendix D.10). The results are shown in Table 1. For CIFAR-10, compared with the models that are carefully trained with randomized smoothing techniques in an end-to-end manner (i.e., without an off-the-shelf classifier), we observe that our method with a standard off-the-shelf classifier outperforms them at smaller ϵ = {0.25, 0.5} on both CIFAR-10 and ImageNet while achieving comparable performance at larger ϵ = {0.75, 1.0}. Compared with the non-diffusion-model-based methods with an off-the-shelf classifier (i.e., Denoised (Salman et al., 2020) and Lee (Lee, 2021)), both our method and Carlini et al. (2022) are significantly better than them. These results verify the non-trivial adversarial robustness improvements introduced by the diffusion model. For ImageNet, our method is consistently better than all prior methods by a large margin.
Since both Carlini et al. (2022) and DensePure use the diffusion model, to better understand the importance of our design, which approximates the label of the high-density region in the conditional distribution, we compare DensePure with Carlini et al. (2022) in a more fine-grained manner.
We show the detailed certified robustness of the model among different σ at different radii for CIFAR-10 in Figure 3-left and for ImageNet in Figure 3-right. We also present our results of certified accuracy at different ϵ in Appendix D.3. From these results, we find that our method is still consistently better at most ϵ (except ϵ = 0) among different σ. The performance margin between ours and Carlini et al. (2022) becomes even larger at large ϵ. These results further indicate that although the diffusion model improves model robustness, leveraging the posterior data distribution conditioned on the input instance (as DensePure does) via the reverse process, instead of using a single sample (Carlini et al. (2022)), is the key to better robustness. Additionally, we use off-the-shelf classifiers, which are ViT-based architectures trained on a larger dataset. In the later ablation study section, we select the CNN-based architecture Wide-ResNet trained on the standard dataset from scratch; our method still achieves non-trivial robustness. Further, our experiments in Appendix D.7 show that removing the diffusion model from DensePure deteriorates the performance, which further verifies that our design is non-trivial.
5.2 ABLATION STUDY
Voting samples (K). We first show how K affects the certified accuracy. For efficiency, we select b = 10. We conduct experiments on both datasets. We show the certified accuracy among different r at σ = 0.25 in Figure 4. The results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.4. Compared with the baseline (Carlini et al., 2022), we find that a larger majority vote number leads to better certified accuracy. This verifies that DensePure indeed benefits adversarial robustness and that a good approximation of the label of the high-density region requires a large number of voting samples. We find that our certified accuracy almost converges at K = 40; thus, we set K = 40 for our experiments. The results with other σ show a similar tendency. To further improve the time efficiency, we can use K-Consensus (Horváth et al., 2021). It accelerates the majority vote process by 45% ∼ 60% with a negligible performance drop. The experimental details and results are in Appendix D.8.
Fast sampling steps (b). To investigate the role of b, we conduct additional experiments with b ∈ {2, 5} at σ = 0.25. The results on ImageNet are shown in Figure 4 and the results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.5. Looking at the results with majority vote, we find that a larger b leads to better certified accuracy since a larger b generates images with higher quality. Looking at the results without majority vote, the conclusion is the opposite: a larger b leads to lower certified accuracy, which contradicts our intuition. We conjecture that although more sampling steps normally lead to better image recovery quality, they also introduce more randomness, increasing the probability that the reversed image falls into a data region with the wrong label. These results further verify that the majority vote is necessary for better performance.
Different architectures. One advantage of DensePure is that it uses an off-the-shelf classifier, so any classifier can be plugged in. We choose convolutional neural network (CNN)-based architectures: Wide-ResNet28-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 with 95.1% accuracy and Wide-ResNet50-2 for ImageNet with 81.5% top-1 accuracy, at σ = 0.25. The results are shown in Table 2 and Figure E in Appendix D.6. Results for more model architectures and more σ on ImageNet are also shown in Appendix D.6. We show that our method can enhance the certified robustness of any given classifier trained on the original data distribution. Noticeably, although the performance of the CNN-based classifier is lower than that of the Transformer-based classifier, DensePure with a CNN-based model as the classifier can outperform Carlini et al. (2022) with a ViT-based model as the classifier (except at ϵ = 0 for CIFAR-10).
6 RELATED WORK
Using an off-the-shelf generative model to purify adversarial perturbations has become an important direction in adversarial defense. Previous works have developed various purification methods based on different generative models, such as GANs (Samangouei et al., 2018), autoregressive generative models (Song et al., 2018), and energy-based models (Du & Mordatch, 2019; Grathwohl et al., 2020; Hill et al., 2021). More recently, as diffusion models (or score-based models) achieve better generation quality than other generative models (Ho et al., 2020; Dhariwal & Nichol, 2021), many works consider using diffusion models for adversarial purification (Nie et al., 2022; Wu et al., 2022; Sun et al., 2022). Although they have reported good empirical results in defending against existing adversarial attacks (Nie et al., 2022), there is no provable guarantee about the robustness of such methods. On the other hand, certified defenses provide guarantees of robustness (Mirman et al., 2018; Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2020; Horváth et al., 2021; Zhang et al., 2018; Raghunathan et al., 2018a;b; Salman et al., 2019b; Wang et al., 2021). They provide a lower bound on model accuracy under constrained perturbations. Among them, approaches (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019a; Jeong & Shin, 2020; Zhai et al., 2020; Horváth et al., 2021; Jeong et al., 2021; Salman et al., 2020; Lee, 2021; Carlini et al., 2022) based on randomized smoothing (Cohen et al., 2019) show great scalability and achieve promising performance on large networks and datasets. The most similar work to ours is Carlini et al. (2022), which uses diffusion models combined with standard classifiers for certified defense. They view the diffusion model as a black box without a theoretical understanding of why and how the diffusion models contribute to such nontrivial certified robustness.
7 CONCLUSION
In this work, we theoretically prove that the diffusion model can purify adversarial examples back to the corresponding clean samples with high probability, as long as the data density of the corresponding clean samples is high enough. Our theoretical analysis characterizes the conditional distribution of the reversed samples given the adversarial input, generated by the diffusion model's reverse process. Using the highest density point in the conditional distribution as the deterministic reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process, which is potentially much larger than that of previous methods. Our analysis inspires us to propose an effective pipeline, DensePure, for adversarial robustness. We conduct comprehensive experiments to show the effectiveness of DensePure by evaluating its certified robustness via the randomized smoothing algorithm. Note that DensePure is an off-the-shelf pipeline that does not require training a smooth classifier. Our results show that DensePure achieves new state-of-the-art certified robustness against L2-norm bounded perturbations. We hope that our work sheds light on an in-depth understanding of the diffusion model for adversarial robustness.
Limitations. The time complexity of DensePure is high since it requires repeating the reverse process multiple times. In this paper, we use fast sampling to reduce the time complexity and show that the setting (b = 2 and K = 10) can achieve nontrivial certified accuracy. We leave the more advanced fast sampling strategy as the future direction.
ETHICS STATEMENT
Our work can positively impact the society by improving the robustness and security of AI systems. We have not involved human subjects or data set releases; instead, we carefully follow the provided licenses of existing data and models for developing and evaluating our method.
8 ACKNOWLEDGMENT
We thank the support of NSF grant No.1910100, NSF CNS 2046726, C3 AI and DHS under grant No. 17STQAC00001-06-00, DARPA under grant N66001-15-C-4066, the Center for Long-Term Cybersecurity, and Berkeley Deep Drive. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the sponsors.
REPRODUCIBILITY STATEMENT
For the theoretical analysis, all necessary assumptions are listed in Appendix B.1 and the complete proofs are included in Appendix B.2. The experimental settings and datasets are provided in Section 5. The pseudo-code for DensePure is in Appendix C.1 and the fast sampling procedures are provided in Appendix C.2.
APPENDIX
Here is the appendix.
A NOTATIONS
p : data distribution
P(A) : probability of event A
Ck : set of functions with continuous k-th derivatives
w(t) : standard Wiener process
w̄(t) : reverse-time standard Wiener process
h(x, t) : drift coefficient in SDE
g(t) : diffusion coefficient in SDE
αt : scaling coefficient at time t
σ2t : variance of added Gaussian noise at time t
{xt}t∈[0,1] : diffusion process generated by SDE
{x̂t}t∈[0,1] : reverse process generated by reverse-SDE
pt : distribution of xt and x̂t
{x1, x2, . . . , xN} : diffusion process generated by DDPM
{βi}Ni=1 : pre-defined noise scales in DDPM
ϵa : adversarial attack
xa : adversarial sample
xa,t : scaled adversarial sample
f(·) : classifier
g(·) : smoothed classifier
P(x̂0 = x | x̂t = xa,t) : density of the conditional distribution generated by reverse-SDE based on xa,t
P(xa; t) : purification model with highest density point
G(x0) : data region with the same label as x0
DfP(G(x0); t) : robust region for G(x0) associated with base classifier f and purification model P
rfP(x0; t) : robust radius for the point associated with base classifier f and purification model P
Dsub(x0; t) : convex robust sub-region
sθ(x, t) : score function
{xθt}t∈[0,1] : reverse process generated by the score-based diffusion model
P(xθ0 = x | xθt = xa,t) : density of the conditional distribution generated by the score-based diffusion model based on xa,t
λ(τ) : weighting scheme of the training loss for the score-based diffusion model
JSM(θ, t; λ(·)) : truncated training loss for the score-based diffusion model
µt, νt : path measures for {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively
B MORE DETAILS ABOUT THEORETICAL ANALYSIS
B.1 ASSUMPTIONS
(i) The data distribution p ∈ C2 and Ex∼p[||x||22] < ∞.
(ii) ∀t ∈ [0, T ]: h(·, t) ∈ C1, and ∃C > 0 such that ∀x ∈ Rn, t ∈ [0, T ]: ||h(x, t)||2 ⩽ C(1 + ||x||2).
(iii) ∃C > 0 such that ∀x, y ∈ Rn: ||h(x, t) − h(y, t)||2 ⩽ C||x − y||2.
(iv) g ∈ C and ∀t ∈ [0, T ], |g(t)| > 0.
(v) ∀t ∈ [0, T ]: sθ(·, t) ∈ C1, and ∃C > 0 such that ∀x ∈ Rn, t ∈ [0, T ]: ||sθ(x, t)||2 ⩽ C(1 + ||x||2).
(vi) ∃C > 0 such that ∀x, y ∈ Rn: ||sθ(x, t) − sθ(y, t)||2 ⩽ C||x − y||2.
B.2 THEOREMS AND PROOFS
Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and point xa,t = √αt xa will generate a reversed random variable x̂0 with conditional distribution
P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσ2t)^n)) exp(−||x − xa||22 / (2σ2t)),
where σ2t = (1 − αt)/αt is the variance of the Gaussian noise added at timestamp t in the diffusion process SDE.
Proof. Under the assumptions, we know {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, which means
P(x̂0 = x | x̂t = xa,t) = P(x̂0 = x, x̂t = xa,t) / P(x̂t = xa,t)
= P(x0 = x, xt = xa,t) / P(xt = xa,t)
= P(x0 = x) P(xt = xa,t | x0 = x) / P(xt = xa,t)
∝ P(x0 = x) · (1/√((2πσ2t)^n)) exp(−||x − xa||22 / (2σ2t))
= p(x) · (1/√((2πσ2t)^n)) exp(−||x − xa||22 / (2σ2t)),
where the third equation is due to the chain rule of probability and the last equation is a result of the diffusion process.
Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with ground-truth label and xa be the adversarial sample; then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)⊤(x′0 − x0) < σ2t log(p(x0)/p(x′0)) + ||x′0 − x0||22 / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. We start with part (i).
The main idea is to prove that a point x′0 such that f(x ′ 0) ̸= f(x0) should have lower density than x0 in the conditional distribution in Theorem 3.1 so that P(xa; t) cannot be x′0. In other words, we should have
P (x̂0 = x0|x̂t = xa,t) > P (x̂0 = x′0 | x̂t = xa,t) .
By Theorem 3.1, this is equivalent to
p(x0) · (1/√((2πσ2t)^n)) exp(−||x0 − xa||22 / (2σ2t)) > p(x′0) · (1/√((2πσ2t)^n)) exp(−||x′0 − xa||22 / (2σ2t))
⇔ log(p(x0)/p(x′0)) > (1/(2σ2t)) (||x0 − xa||22 − ||x′0 − xa||22)
⇔ log(p(x0)/p(x′0)) > (1/(2σ2t)) (||x0 − xa||22 − ||x′0 − x0 + x0 − xa||22)
⇔ log(p(x0)/p(x′0)) > (1/(2σ2t)) (2(xa − x0)⊤(x′0 − x0) − ||x′0 − x0||22).
Re-organizing the above inequality, we obtain
(xa − x0)⊤(x′0 − x0) < σ2t log(p(x0)/p(x′0)) + ||x′0 − x0||22 / 2.
Note that the order of xa is at most one in every term of the above inequality, so the inequality actually defines a half-space in Rn for every (x0,x′0) pair. Further, we have to satisfy the inequality for every x′0 such that f(x ′ 0) ̸= f(x0), therefore, by intersecting over all such half-spaces, we obtain a convex Dsub (x0; t). Then we prove part (ii).
On the one hand, if xa ∈ D (G(x0); t), then there exists one x̃0 such that f(x̃0) = f(x0) and xa ∈ Dsub (x̃0; t). By part (i), x̃0 has higher probability than all other points with different labels from x0 in the conditional distribution P (x̂0 = x|x̂t = xa,t) characterized by Theorem 3.1. Therefore, P(xa; t) should have the same label as x0. On the other hand, if xa /∈ D (G(x0); t), then there is a point x̃1 with different label from x0 such that for any x̃0 with the same label as x0, P (x̂0 = x̃1|x̂t = xa,t) > P (x̂0 = x̃0|x̂t = xa,t). In other words, P(xa; t) would have different label from x0.
Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we can bound
DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)),
where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively,
JSM(θ, t; λ(·)) := (1/2) ∫_0^t E_{pτ(x)} [ λ(τ) ||∇x log pτ(x) − sθ(x, τ)||22 ] dτ,
sθ(x, τ) is the score function to approximate ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. Similar to the proof of (Song et al., 2021a, Theorem 1), let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on the scaled adversarial sample xa,t. Under conditions B.1, the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013):
DKL(P(x̂0 = x | x̂t = xa,t) ∥ P(xθ0 = x | xθt = xa,t)) = −Eµt [ log dνt/dµt ]
(i)= Eµt [ ∫_0^t g(τ) (∇x log pτ(x) − sθ(x, τ)) dw̄τ + (1/2) ∫_0^t g(τ)2 ||∇x log pτ(x) − sθ(x, τ)||22 dτ ]
(ii)= Eµt [ (1/2) ∫_0^t g(τ)2 ||∇x log pτ(x) − sθ(x, τ)||22 dτ ]
= (1/2) ∫_0^t E_{pτ(x)} [ g(τ)2 ||∇x log pτ(x) − sθ(x, τ)||22 ] dτ
= JSM(θ, t; g(·)2),
where (i) is due to the Girsanov theorem and (ii) is due to the martingale property of Itô integrals.
C MORE DETAILS ABOUT DENSEPURE
C.1 PSEUDO-CODE
We provide the pseudo-code of DensePure in Algorithm 1 and Algorithm 2.
Algorithm 1: DensePure pseudo-code with the highest density point.
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose ψ = t.
2: Input sample xa = x0 + ϵa.
3: Compute x̂0 = P(xa; ψ).
4: ŷ = f(x̂0).
Algorithm 2: DensePure pseudo-code with majority vote.
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose σ.
2: Compute αn = 1/(1 + σ2) and n = argmin_s {|αs − 1/(1 + σ2)| : s ∈ {1, 2, · · · , N}}.
3: Generate input sample xrs = x0 + ϵ, ϵ ∼ N(0, σ2I).
4: Choose schedule S^b; get x̂_0^i ← rev(√αn xrs)_i, i = 1, 2, . . . , K with Fast Sampling.
5: ŷ = MV({f(x̂_0^1), . . . , f(x̂_0^K)}) = argmax_c Σ_{i=1}^{K} 1{f(x̂_0^i) = c}.
C.2 DETAILS ABOUT FAST SAMPLING
Applying the single-step operation n times is a time-consuming process. In order to reduce the time complexity, we follow the method used in Nichol & Dhariwal (2021) and sample a subsequence S^b with b values (i.e., S^b = {n, ⌊n − n/b⌋, · · · , 1}, where S^b_j is the j-th element of S^b, S^b_j = ⌊n − jn/b⌋ for all j < b, and S^b_b = 1) from the original schedule S (i.e., S = {n, n − 1, · · · , 1}, where S_j = j is the j-th element of S).
Within this context, we adapt the original α schedule α^S = {α1, · · · , αi, · · · , αn} used for the single-step operation to the new schedule α^{S^b} = {α_{S^b_1}, · · · , α_{S^b_j}, · · · , α_{S^b_b}} (i.e., the i-th element of α^{S^b} is α_{S^b_i} = α_{⌊n − in/b⌋}). We calculate the corresponding β^{S^b} = {β_{S^b_1}, β_{S^b_2}, · · · , β_{S^b_b}} and β̃^{S^b} = {β̃_{S^b_1}, β̃_{S^b_2}, · · · , β̃_{S^b_b}} schedules, where β_{S^b_i} = 1 − α_{S^b_i}/α_{S^b_{i−1}} and β̃_{S^b_i} = ((1 − α_{S^b_{i−1}})/(1 − α_{S^b_i})) β_{S^b_i}.
Certified Accuracy (%) at ϵ = 0.0 / 0.25 / 0.5 / 0.75 / 1.0:
Carlini (Carlini et al., 2022), σ = 0.25: 88.0 73.8 56.2 41.6 0.0
Carlini (Carlini et al., 2022), σ = 0.5: 74.2 62.0 50.4 40.2 31.0
Carlini (Carlini et al., 2022), σ = 1.0: 49.4 41.4 34.2 27.8 21.8
Ours, σ = 0.25: 87.6(-0.4) 76.6(+2.8) 64.6(+8.4) 50.4(+8.8) 0.0(+0.0)
Ours, σ = 0.5: 73.6(-0.6) 65.4(+3.4) 55.6(+5.2) 46.0(+5.8) 37.4(+6.4)
Ours, σ = 1.0: 55.0(+5.6) 47.8(+6.4) 40.8(+6.6) 33.0(+5.2) 28.2(+6.4)
Table A: Certified accuracy compared with Carlini et al. (2022) for CIFAR-10 at all σ. The numbers in the bracket are the difference of certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).
With these new schedules, we can use b reverse steps to calculate x̂0 = Reverse(· · ·Reverse(Reverse(xn; S^b_b); S^b_{b−1}); · · · ; 1). Since Σθ(x_{S^b_i}, S^b_i) is parameterized as a range between β^{S^b} and β̃^{S^b}, it will automatically be rescaled. Thus, x̂_{S^b_{i−1}} = Reverse(x̂_{S^b_i}; S^b_i) is equivalent to sampling x_{S^b_{i−1}} from N(x_{S^b_{i−1}}; µθ(x_{S^b_i}, S^b_i), Σθ(x_{S^b_i}, S^b_i)).
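The respacing rule above can be sketched in a few lines of NumPy (our own illustrative code; it assumes the sub-sampled steps are processed in increasing time order so that the ratios stay in (0, 1), with the convention α_{S^b_0} := 1):

```python
import numpy as np

def respaced_betas(alpha_bar, sub_steps):
    """Adapted beta and beta-tilde schedules for a sub-sampled reverse process.

    alpha_bar: cumulative products for the original schedule; alpha_bar[s-1] is the value at step s.
    sub_steps: sub-sampled timesteps (any order); they are sorted increasingly here.
    Implements beta_i = 1 - a_i / a_{i-1} and beta_tilde_i = (1 - a_{i-1}) / (1 - a_i) * beta_i.
    """
    steps = sorted(sub_steps)
    a = np.array([alpha_bar[s - 1] for s in steps])
    a_prev = np.concatenate(([1.0], a[:-1]))
    betas = 1.0 - a / a_prev
    beta_tilde = (1.0 - a_prev) / (1.0 - a) * betas
    return betas, beta_tilde
```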
D MORE EXPERIMENTAL DETAILS AND RESULTS
D.1 IMPLEMENTATION DETAILS
We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure, the sampling numbers when computing the certified radius are n = 100,000 for CIFAR-10 and n = 10,000 for ImageNet. We evaluate the certified robustness on a 500-sample subset of the CIFAR-10 test set and a 100-sample subset of the ImageNet validation set. We set K = 40 and b = 10 except for the results in the ablation study.
D.2 BASELINES.
We select randomized-smoothing-based methods including PixelDP (Lecuyer et al., 2019), RS (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a), Consistency (Jeong & Shin, 2020), MACER (Zhai et al., 2020), Boosting (Horváth et al., 2021), SmoothMix (Jeong et al., 2021), Denoised (Salman et al., 2020), Lee (Lee, 2021), and Carlini (Carlini et al., 2022) as our baselines. Among them, PixelDP, RS, SmoothAdv, Consistency, MACER, and SmoothMix require training a smooth classifier for better certification performance while the others do not. Salman et al. and Lee use an off-the-shelf classifier but without using the diffusion model. The most similar one to ours is Carlini et al., which also uses both an off-the-shelf diffusion model and classifier. The above two settings mainly follow Carlini et al. (2022), which makes it easier to compare with their results.
D.3 MAIN RESULTS FOR CERTIFIED ACCURACY
We compare with Carlini et al. (2022) in a more fine-grained manner. We provide results of certified accuracy at different ϵ in Table A for CIFAR-10 and Table B for ImageNet. We include the accuracy difference between ours and Carlini et al. (2022) in brackets in the tables. We can observe from the tables that the certified accuracy of our method outperforms Carlini et al. (2022) except at ϵ = 0 for σ = 0.25, 0.5 on CIFAR-10.
D.4 EXPERIMENTS FOR VOTING SAMPLES
Here we provide more experiments with σ ∈ {0.5, 1.0} and b = 10 for different numbers of voting samples K in Figure A and Figure B. The results for CIFAR-10 are in Figure G. We can draw the same conclusion as mentioned in the main text.
Certified Accuracy (%) at ϵ = 0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 3.0:
Carlini (Carlini et al., 2022), σ = 0.25: 77.0 71.0 0.0 0.0 0.0 0.0
Carlini (Carlini et al., 2022), σ = 0.5: 74.0 67.0 54.0 46.0 0.0 0.0
Carlini (Carlini et al., 2022), σ = 1.0: 59.0 53.0 49.0 38.0 29.0 22.0
Ours, σ = 0.25: 80.0(+3.0) 76.0(+5.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0)
Ours, σ = 0.5: 75.0(+1.0) 72.0(+5.0) 62.0(+8.0) 49.0(+3.0) 0.0(+0.0) 0.0(+0.0)
Ours, σ = 1.0: 61.0(+2.0) 57.0(+4.0) 53.0(+4.0) 49.0(+11.0) 37.0(+8.0) 26.0(+4.0)
Table B: Certified accuracy compared with Carlini et al. (2022) for ImageNet at all σ. The numbers in the bracket are the difference of certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).
Figure A (panels: CIFAR-10 and ImageNet): Certified accuracy among different vote numbers at different radii. Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 0.50.
D.5 EXPERIMENTS FOR FAST SAMPLING STEPS
We also implement additional experiments with b ∈ {1, 2, 10} at σ = 0.5, 1.0. The results are shown in Figure C and Figure D. The results for CIFAR-10 are in Figure G. We draw the same conclusion as mentioned in the main text.
D.6 EXPERIMENTS FOR DIFFERENT ARCHITECTURES
We try different model architectures for ImageNet, including Wide ResNet-50-2 and ResNet-152, with b = 2 and K = 10. The results are shown in Figure F. We find that our method outperforms Carlini et al. (2022) for all σ among different classifiers.
D.7 EXPERIMENTS FOR RANDOMIZED SMOOTHING WITHOUT DIFFUSION MODEL
To show the effectiveness of our diffusion model design, we remove the diffusion model from our pipeline and conduct experiments. Specifically, first, we remove the diffusion model and perform randomized smoothing only on the pretrained classifier that we used in DensePure (i.e., ViT-B/16 for CIFAR-10 and BEiT for ImageNet). The results are shown in Table C and Table D. The number in the bracket is the robust accuracy of pretrained classifier - the robust accuracy of DensePure. From the result, we conclude that without the help of diffusion models, neither ViT nor BEiT could reach high certified accuracy.
Second, we conduct additional experiments to fairly compare with randomized smoothing without diffusion models under majority vote settings. Specifically, we activate droppath in BEiT at the inference stage to support majority votes. The other settings are the same as DensePure. The results are shown in Table E. The number in the bracket is calculated by the robust accuracy of BeiT with majority votes - the robust accuracy of DensePure. We find that simply performing majority votes on the BeiT classifier will not result in higher certified robustness.
Figure B (panels: CIFAR-10 and ImageNet): Certified accuracy among different vote numbers at different radii. Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 1.00.
Figure C (panels: CIFAR-10 and ImageNet): Certified accuracy with different fast sampling steps b. Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.50.
Third, to compare with randomized smoothing without the diffusion model, we also evaluate certified accuracy with Gaussian-augmentation-trained ViT models on CIFAR-10. The results shown in Table F demonstrate that DensePure can still achieve higher certified accuracy than randomized smoothing even on Gaussian-augmented models without diffusion models. The numbers in the bracket are the difference between the robust accuracy of Gaussian-augmentation randomized smoothing and that of DensePure.
D.8 EXPERIMENTS FOR K-CONSENSUS AGGREGATION
To improve the efficiency of our algorithm, we try K-Consensus Aggregation, where an early stop is triggered if the classification results of k consecutive reversed samples are the same. Here we calculate the certified robustness for 100 subsamples of CIFAR-10 and ImageNet with 2 sampling steps, a maximum of 10 majority votes, and a consensus threshold k = 3. Results are shown in Table G and Table H. The column "Avg MV" in the tables reports the average number of majority votes actually required by our algorithm. For instance, if the predicted labels of the first 3 reversed samples are the same, the actual majority vote number is 3. The numbers in the bracket are the difference in certified accuracy with and without K-Consensus Aggregation.
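For concreteness, a minimal sketch of this early-stopping rule is shown below (our own illustration; `reverse` and `classifier` are placeholders for the components used in the main pipeline):

```python
def densepure_consensus_predict(x, reverse, classifier, max_votes=10, k=3):
    """Majority vote with early stopping once k consecutive labels agree."""
    labels = []
    for _ in range(max_votes):
        label = int(classifier(reverse(x)).argmax(dim=-1))
        labels.append(label)
        if len(labels) >= k and len(set(labels[-k:])) == 1:
            break  # the last k reversed samples received the same label
    return max(set(labels), key=labels.count)  # majority vote over collected labels
```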
D.9 EXPERIMENTS FOR CERTIFIED ACCURACY WITH LESS SAMPLING STEPS AND VOTE NUMBERS
We also conduct additional experiments with 2 sampling steps and 5 majority votes. The results are shown in Table I. We find that our method still achieves better results than the existing method.
Figure D (panels: CIFAR-10 and ImageNet): Certified accuracy with different fast sampling steps b. Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 1.00.
Figure E (panels: CIFAR-10 and ImageNet): Certified accuracy with different architectures. Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.25.
D.10 EXPERIMENTS FOR DENSEPURE 500 TEST SAMPLING NUMBER RESULTS ON IMAGENET
We increase the ImageNet test sampling number from 100 to 500 and update the experiment results in Table J and Table K. We can draw a similar conclusion.
Figure F (panels: Wide ResNet-50-2 and ResNet-152): Certified accuracy on ImageNet for different architectures. The lines represent the certified accuracy at different L2 perturbation bounds with different Gaussian noise σ ∈ {0.25, 0.50, 1.00}.
Figure G: Ablation study. The left image shows the certified accuracy among different vote numbers at different radii ϵ ∈ {0.0, 0.25, 0.5, 0.75}. Each line represents the certified accuracy of our method among different vote numbers K with Gaussian noise σ = 0.25. The right image shows the certified accuracy with different fast sampling steps b. Each line shows the certified accuracy among different L2 adversarial perturbation bounds.
Certified Accuracy (%) at ϵ = 0.0 / 0.25 / 0.5 / 0.75 / 1.0:
σ = 0.25: 20.8(-66.8) 7.4(-69.2) 1.8(-62.8) 0.2(-50.2) 0.0(+0.0)
σ = 0.5: 11.6(-62.0) 6.6(-58.8) 3.8(-51.8) 1.2(-44.8) 0.2(-37.2)
σ = 1.0: 10.6(-44.4) 10.6(-37.4) 9.4(-31.4) 9.4(-23.6) 9.4(-18.8)
Table C: Certified accuracy of randomized smoothing on the pretrained classifier ViT-B/16 at all σ for CIFAR-10.
Certified Accuracy (%) at ϵ = 0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 3.0:
σ = 0.25: 73.2(-10.8) 55.8(-22.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0)
σ = 0.5: 7.8(-72.4) 4.6(-71.0) 3.2(-63.8) 1.0(-53.6) 0.0(+0.0) 0.0(+0.0)
σ = 1.0: 0.0(-67.8) 0.0(-61.4) 0.0(-55.6) 0.0(-50.0) 0.0(-42.2) 0.0(-25.8)
Table D: Certified accuracy of randomized smoothing on the pretrained classifier BEiT at all σ for ImageNet.
Certified Accuracy (%) at ϵ = 0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 3.0:
σ = 0.25: 73.8(-10.2) 58.0(-19.8) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0)
σ = 0.5: 9.0(-71.2) 7.0(-68.6) 4.0(-63.0) 2.0(-52.6) 0.0(+0.0) 0.0(+0.0)
σ = 1.0: 0.0(-67.8) 0.0(-61.4) 0.0(-55.6) 0.0(-50.0) 0.0(-42.2) 0.0(-25.8)
Table E: Certified accuracy of randomized smoothing on droppath-activated BEiT with 10 majority votes at all σ for ImageNet.
Certified Accuracy (%) at ϵ = 0.0 / 0.25 / 0.5 / 0.75 / 1.0:
σ = 0.25: 88.2(+0.6) 71.4(-5.2) 53.2(-11.4) 35.2(-15.2) 0.0(+0.0)
σ = 0.5: 69.8(-3.8) 60.0(-5.4) 48.4(-7.2) 37.2(-8.8) 27.2(-10.2)
σ = 1.0: 49.0(-6.0) 41.8(-6.0) 34.0(-6.8) 27.0(-6.0) 22.0(-6.2)
Table F: Certified accuracy of randomized smoothing on Gaussian-augmentation-trained ViT at all σ on CIFAR-10.
Certified Accuracy (%) at ϵ = 0.0 / 0.25 / 0.5 / 0.75 / 1.0, followed by Avg MV:
σ = 0.25: 92(+0.0) 77(+0.0) 60(+0.0) 48(-1.0) 0(+0.0), Avg MV 3.84
σ = 0.5: 74(+0.0) 65(+0.0) 53(-1.0) 45(+0.0) 40(+0.0), Avg MV 4.43
σ = 1.0: 53(+0.0) 46(+0.0) 42(+0.0) 31(+0.0) 25(+0.0), Avg MV 5.49
Table G: Certified accuracy and average majority votes with 2 sample steps and k = 3 consensus threshold at all σ for CIFAR-10.
Certified Accuracy (%) at ϵ = 0.0 / 0.5 / 1.0 / 1.5 / 2.0 / 3.0, followed by Avg MV:
σ = 0.25: 78(+0.0) 74(+0.0) 0(+0.0) 0(+0.0) 0(+0.0) 0(+0.0), Avg MV 3.34
σ = 0.5: 75(+0.0) 69(+0.0) 61(+0.0) 47(+0.0) 0(+0.0) 0(+0.0), Avg MV 3.89
σ = 1.0: 60(+0.0) 54(+0.0) 50(+0.0) 41(+0.0) 32(+0.0) 23(+0.0), Avg MV 5.23
Table H: Certified accuracy and average majority votes with 2 sample steps and k = 3 consensus threshold at all σ for ImageNet.
Certified Accuracy at ϵ(%) CIFAR-10 ImageNet
Method Off-the-shelf 0.25 0.5 0.75 1.0 0.5 1.0 1.5 2.0 3.0
PixelDP (Lecuyer et al., 2019) ✗ (71.0)22.0 (44.0)2.0 - - (33.0)16.0 - - - -
RS (Cohen et al., 2019) ✗ (75.0)61.0 (75.0)43.0 (65.0)32.0 (65.0)23.0 (67.0)49.0 (57.0)37.0 (57.0)29.0 (44.0)19.0 (44.0)12.0
SmoothAdv (Salman et al., 2019a) ✗ (82.0)68.0 (76.0)54.0 (68.0)41.0 (64.0)32.0 (63.0)54.0 (56.0)42.0 (56.0)34.0 (41.0)26.0 (41.0)18.0
Consistency (Jeong & Shin, 2020) ✗ (77.8)68.8 (75.8)58.1 (72.9)48.5 (52.3)37.8 (55.0)50.0 (55.0)44.0 (55.0)34.0 (41.0)24.0 (41.0)17.0
MACER (Zhai et al., 2020) ✗ (81.0)71.0 (81.0)59.0 (66.0)46.0 (66.0)38.0 (68.0)57.0 (64.0)43.0 (64.0)31.0 (48.0)25.0 (48.0)14.0
Boosting (Horváth et al., 2021) ✗ (83.4)70.6 (76.8)60.4 (71.6)52.4 (73.0)38.8 (65.6)57.0 (57.0)44.6 (57.0)38.4 (44.6)28.6 (38.6)21.2
SmoothMix (Jeong et al., 2021) ✓ (77.1)67.9 (77.1)57.9 (74.2)47.7 (61.8)37.2 (55.0)50.0 (55.0)43.0 (55.0)38.0 (40.0)26.0 (40.0)17.0
Denoised (Salman et al., 2020) ✓ (72.0)56.0 (62.0)41.0 (62.0)28.0 (44.0)19.0 (60.0)33.0 (38.0)14.0 (38.0)6.0 - -
Lee (Lee, 2021) ✓ 60.0 42.0 28.0 19.0 41.0 24.0 11.0 - -
Carlini (Carlini et al., 2022) ✓ (88.0)73.8 (88.0)56.2 (88.0)41.6 (74.2)31.0 (82.0)74.0 (77.2)59.8 (77.2)47.0 (64.6)31.0 (64.6)19.0
Ours ✓ (87.6)76.6 (87.6)64.6 (87.6)50.4 (73.6)37.4 (84.0)77.8 (80.2)67.0 (80.2)54.6 (67.8)42.2 (67.8)25.8
Table J: Certified accuracy compared with existing works. The certified accuracy at ϵ = 0 for each model is in the parentheses. The certified accuracy for each cell is from the respective papers except Carlini et al. (2022). Our diffusion model and classifier are the same as Carlini et al. (2022), where the off-the-shelf classifier uses ViT-based architectures trained on a large dataset (ImageNet-22k).
Certified Accuracy at ϵ(%) Methods Noise 0.0 0.5 1.0 1.5 2.0 3.0
σ = 0.25 82.0 74.0 0.0 0.0 0.0 0.0 Carlini (Carlini et al., 2022) σ = 0.5 77.2 71.8 59.8 47.0 0.0 0.0
σ = 1.0 64.6 57.8 49.2 40.6 31.0 19.0
σ = 0.25 84.0(+2.0) 77.8(+3.8) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) 0.0(+0.0) Ours σ = 0.5 80.2(+3.0) 75.6(+3.8) 67.0(+7.2) 54.6(+7 | 1. What is the focus and contribution of the paper on improving certified robustness?
2. What are the strengths of the proposed method, particularly in terms of using a diffusion model?
3. What are the weaknesses of the paper regarding its originality and significance compared to prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e. classifier). Specifically, DensePure uses the diffusion model to denoise the adversarial input to get multiple reversed samples, which are then passed through the off-the-shelf classifier, followed by majority voting of inferred labels to make the final prediction. The extensive experiments demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model and show it is consistently better than existing methods on ImageNet.
Strengths And Weaknesses
Quality/Clarity: the paper is well written and the techniques presented are easy to follow. Its motivation is clear: use the diffusion model to denoise the input and improve classification accuracy. On a technical level, I do not see much new contribution, since most equations are from the diffusion model. For the BEiT in Table 2, do we use the same BEiT model for evaluation while comparing to Carlini 2022? We do not know whether its gain comes from the model itself or from the label voting. Also, it is better to add an experimental comparison w/o the diffusion model.
Originality/significance: the idea is interesting; it uses the diffusion model to improve data quality and thereby classification performance. Another contribution is the label voting over multiple samples. However, both the diffusion model and the pretrained classifier are known, which makes DensePure an incremental approach. In addition, the diffusion model only handles Gaussian noise, which limits the application of this approach.
Clarity, Quality, Novelty And Reproducibility
The paper is clear and easy to follow. The idea to include denoising step with diffusion model to improve classification performance is incremental. And more experiments are needed in Table 2 (see above for details) |
ICLR | Title
DensePure: Understanding Diffusion Models for Adversarial Robustness
Abstract
Diffusion models have been recently employed to improve certified robustness through the process of denoising. However, the theoretical understanding of why diffusion models are able to improve the certified robustness is still lacking, preventing from further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method DensePure, designed to improve the certified robustness of a pretrained model (i.e. classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to get multiple reversed samples, which are then passed through the classifier, followed by majority voting of inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process in a diffusion model is also high; thus sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with a high probability. By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model’s reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high density region in the conditional distribution so that it can enhance certified robustness. We conduct extensive experiments to demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model via randomized smoothing. We show that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average. Project page:https://densepure.github.io/.
1 INTRODUCTION
Diffusion models have been shown to be a powerful image generation tool (Ho et al., 2020; Song et al., 2021b) owing to their iterative diffusion and denoising processes. These models have achieved state-of-the-art performance on sample quality (Dhariwal & Nichol, 2021; Vahdat et al., 2021) as well as effective mode coverage (Song et al., 2021a). A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time (Song et al., 2021b).
Given the natural denoising property of diffusion models, empirical studies have leveraged them for adversarial purification (Nie et al., 2022; Wu et al., 2022; Carlini et al., 2022). For instance, Nie et al. (2022) employed diffusion models for model purification, DiffPure. They empirically show that by carefully choosing the amount of Gaussian noises added during the diffusion process, adversarial
∗the first four authors contributed equally
perturbations can be removed while preserving the true label semantics. Despite the significant empirical result, there is no provable guarantee of the achieved robustness. A concurrent work (Carlini et al., 2022) instantiated the randomized smoothing approach with the diffusion model to offer a provable guarantee of model robustness against L2-norm bounded adversarial example. However, they do not provide a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
Our Approach. We are the first to theoretically analyze the fundamental properties of diffusion models to understand why and how diffusion models enhance certified robustness. This deeper understanding allows us to propose a new method DensePure to improve the certified robustness of any given classifier more effectively using diffusion models. An illustration of the DensePure framework is provided in Figure 1, where it consists of a pretrained diffusion model and a pretrained classifier. DensePure incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times with different random seeds to approximate the label of the high-density region in the conditional distribution via a simple majority vote strategy. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to calculate their labels. We then apply the majority vote on the set of labels to get the final predicted label.
DensePure is inspired by our theoretical analysis, where we show that the reverse process of the diffusion model provides a conditional distribution of the reversed sample given an adversarial input. Sampling from this conditional distribution can enhance the certified robustness. Specifically, we prove that when the data density of clean samples is high, it is a sufficient condition for the conditional density of the reversed samples to be also high. Therefore, in DensePure, samples from the conditional distribution can recover the ground-truth labels with a high probability.
For understanding and rigorous analysis conveniently, we use the highest density point in the conditional distribution as the deterministic reversed sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model’s reverse process is the union of multiple convex sets, each surrounding a region around the ground-truth label. Compared with the robust region of previous work (Cohen et al., 2019), which only focuses on only one region with the ground-truth label, such the union of multiple convex sets has the potential to provide a much larger robust region, resulting in higher certified robustness. Moreover, the characterization implies that the size of robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels.
We conduct extensive experiments on ImageNet and CIFAR-10 datasets under different settings to evaluate the certifiable robustness of DensePure. In particular, we follow the setting from Carlini et al. (2022) and rely on randomized smoothing to certify the robustness of the adversarial perturbations bounded in the L2-norm. We show that DensePure achieves a new state-of-the-art certified robustness on the standard pretrained model without further tuning any model parameters (e.g., smooth augmentation Cohen et al. (2019)). On ImageNet, it achieves a consistently higher certified accuracy, 7% improvement on average, than the existing methods among every σ at every radius ϵ .
Technical Contributions. In this paper, we take the first step to understand why and how diffusion models contribute to certified robustness. We make contributions on both theoretical and empirical fronts: (1)in theory, we prove that an adversarial example can be recovered back to the original clean sample with a high probability via the reverse process of a diffusion model. (2) In theory, we characterized the robust region for each point by further taking the highest density point in the conditional
distribution generated by the reverse process as the reversed sample. We show that the robust region for a given sample under the diffusion model’s reverse process has the potential to provide a larger robust region. To the best of our knowledge, this is the first work that characterizes the robust region of using the reverse process of the diffusion model for adversarial purification (3) In practice, we proposed DensePurebased on our theoretical analysis. We demonstrated DensePureis consistently better than existing methods on ImageNet, with 7% improvement on average.
2 PRELIMINARIES AND BACKGROUNDS
Continuous-Time Diffusion Model. The diffusion model has two components: the diffusion process followed by the reverse process. Given an input random variable x0 ∼ p, the diffusion process adds isotropic Gaussian noises to the data so that the diffused random variable at time t is xt = √ αt(x0 + ϵt), s.t., ϵt ∼ N (0, σ2t I), and σ2t = (1 − αt)/αt, and we denote xt ∼ pt. The forward diffusion process can also be defined by the stochastic differential equation
dx = h(x, t)dt+ g(t)dw, (SDE)
where x0 ∼ p, h : Rd × R 7→ Rd is the drift coefficient, g : R 7→ R is the diffusion coefficient, and w(t) ∈ Rn is the standard Wiener process. Under mild conditions B.1, the reverse process exists and removes the added noise by solving the reverse-time SDE (Anderson, 1982)
dx̂ = [h(x̂, t)− g(t)2▽x̂ log pt(x̂)]dt+ g(t)dw, (reverse-SDE)
where dt is an infinitesimal reverse time step, and w(t) is a reverse-time standard Wiener process.
In our context, we use the conventions of VP-SDE (Song et al., 2021b) where h(x, t) := −(1/2)γ(t)x and g(t) := √γ(t) with γ(t) positive and continuous over [0, 1], such that x(t) = √αt x(0) + √(1 − αt) ϵ, where αt = e^(−∫_0^t γ(s)ds) and ϵ ∼ N(0, I). We use {xt}t∈[0,1] and {x̂t}t∈[0,1] to denote the diffusion process and the reverse process generated by SDE and reverse-SDE respectively, which follow the same distribution.
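For intuition only, the following NumPy sketch discretizes reverse-SDE with an Euler-Maruyama scheme under the VP-SDE convention above; `score` is assumed to approximate ∇x log pt(x) (e.g., a trained score network) and `gamma` to implement γ(t), neither of which is specified here.

```python
import numpy as np

def reverse_sde_sample(x_t, t_start, score, gamma, n_steps=1000, rng=None):
    """Euler-Maruyama discretization of reverse-SDE from time t_start down to 0.

    VP-SDE drift h(x, t) = -0.5 * gamma(t) * x and diffusion g(t) = sqrt(gamma(t)),
    so each reverse step applies
        x <- x - [h(x, t) - g(t)^2 * score(x, t)] * dt + g(t) * sqrt(dt) * z,  z ~ N(0, I).
    """
    rng = rng or np.random.default_rng()
    x = np.array(x_t, dtype=float)
    dt = t_start / n_steps
    for i in range(n_steps):
        t = t_start - i * dt
        g2 = gamma(t)                              # g(t)^2 = gamma(t)
        drift = -0.5 * g2 * x - g2 * score(x, t)   # h(x, t) - g(t)^2 * score
        x = x - drift * dt + np.sqrt(g2 * dt) * rng.standard_normal(x.shape)
    return x
```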
Discrete-Time Diffusion Model (or DDPM (Ho et al., 2020)). DDPM constructs a discrete Markov chain {x0, x1, · · · , xi, · · · , xN} as the forward process for the training data x0 ∼ p, such that P(xi | xi−1) = N(xi; √(1 − βi) xi−1, βi I), where 0 < β1 < β2 < · · · < βN < 1 are predefined noise scales such that xN approximates Gaussian white noise. Denoting αi = ∏_{j=1}^{i} (1 − βj), we have P(xi | x0) = N(xi; √αi x0, (1 − αi)I), i.e., xi(x0, ϵ) = √αi x0 + √(1 − αi) ϵ, ϵ ∼ N(0, I). The reverse process of DDPM learns a reverse-direction variational Markov chain pθ(xi−1 | xi) = N(xi−1; µθ(xi, i), Σθ(xi, i)). Ho et al. (2020) define ϵθ as a function approximator to predict ϵ from xi such that µθ(xi, i) = (1/√(1 − βi)) (xi − (βi/√(1 − αi)) ϵθ(xi, i)). Then the reverse-time samples are generated by x̂i−1 = (1/√(1 − βi)) (x̂i − (βi/√(1 − αi)) ϵθ∗(x̂i, i)) + √βi ϵ, ϵ ∼ N(0, I), and the optimal parameters θ∗ are obtained by solving θ∗ := argminθ Ex0,ϵ [ ||ϵ − ϵθ(√αi x0 + √(1 − αi) ϵ, i)||22 ].
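A single reverse step of this chain can be sketched as follows (a minimal PyTorch-style illustration assuming a learned noise predictor `eps_model`; the fixed variance √βi is used here rather than a learned Σθ):

```python
import torch

def ddpm_reverse_step(x_i, i, eps_model, betas, alpha_bars):
    """One DDPM reverse step: sample x_{i-1} ~ N(mu_theta(x_i, i), beta_i * I).

    betas[i-1] is beta_i and alpha_bars[i-1] is the cumulative product
    alpha_i = prod_{j<=i} (1 - beta_j); i is a 1-indexed timestep.
    """
    beta_i = betas[i - 1]
    alpha_bar_i = alpha_bars[i - 1]
    eps = eps_model(x_i, torch.tensor([i]))  # predicted noise eps_theta(x_i, i)
    mean = (x_i - beta_i / (1.0 - alpha_bar_i) ** 0.5 * eps) / (1.0 - beta_i) ** 0.5
    if i > 1:
        return mean + beta_i ** 0.5 * torch.randn_like(x_i)
    return mean  # no noise is added at the final step
```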
Randomized Smoothing. Randomized smoothing is used to certify the robustness of a given classifier against L2-norm based perturbations. It transforms the classifier f into a smooth version g(x) = argmax_c P_{ϵ∼N(0,σ2I)}(f(x + ϵ) = c), where g is the smooth classifier and σ is a hyperparameter of the smooth classifier g, which controls the trade-off between robustness and accuracy. Cohen et al. (2019) show that g(x) induces certifiable robustness for x under the L2-norm with radius R, where R = (σ/2)(Φ−1(pA) − Φ−1(pB)); pA and pB are the probabilities of the most probable class and the "runner-up" class respectively; Φ−1 is the inverse of the standard Gaussian CDF. The pA and pB can be estimated with arbitrarily high confidence via the Monte Carlo method (Cohen et al., 2019).
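As an illustration, the certified radius can be computed from Monte Carlo class counts as sketched below (our own sketch; it uses the practical variant of Cohen et al. (2019) in which pB is bounded by 1 − pA, so R reduces to σ · Φ−1(pA) with a Clopper-Pearson lower bound on pA):

```python
from scipy.stats import beta, norm

def certified_radius(top_class_count, n_samples, sigma, alpha=0.001):
    """Lower-confidence certified L2 radius from randomized-smoothing counts."""
    if top_class_count == 0:
        return None
    # One-sided Clopper-Pearson lower bound on p_A at confidence level 1 - alpha.
    p_a_lower = beta.ppf(alpha, top_class_count, n_samples - top_class_count + 1)
    if p_a_lower <= 0.5:
        return None  # abstain: the top class is not certifiably dominant
    return sigma * norm.ppf(p_a_lower)  # R = sigma * Phi^{-1}(p_A), using p_B <= 1 - p_A
```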
3 THEORETICAL ANALYSIS
In this section, we theoretically analyze why and how the diffusion model can enhance the robustness of a given classifier. We will analyze directly on SDE and reverse-SDE as they generate the same
stochastic processes {xt}t∈[0,T ] and the literature works establish an approximation on reverseSDE (Song et al., 2021b; Ho et al., 2020).
We first show in Theorem 3.1 that, given a diffusion model, solving reverse-SDE generates a conditional distribution based on the scaled adversarial sample, which has high density on data regions that have high data density and are close to the adversarial sample. See the detailed conditions in B.1. Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and sample xa,t = √αt xa will generate a reversed random variable x̂0 with density P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσ2t)^n)) exp(−||x − xa||22 / (2σ2t)), where p is the data distribution and σ2t = (1 − αt)/αt is the variance of the Gaussian noise added at time t in the diffusion process.
Proof. (sketch) Under conditions B.1, we know {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, and then the rest proof follows Bayes’ Rule.
Please see the full proofs of this and the following theorems in Appendix B.2. Remark 1. Note that P (x̂0 = x|x̂t = xa,t) > 0 if and only if p(x) > 0, thus the generated reverse sample will be on the data region where we train classifiers.
In Theorem 3.1, the conditional density P (x̂0 = x|x̂t = xa,t) is high if both p(x) and the Gaussian term have high values, i.e., x has high data density and is close to the adversarial sample xa. The latter condition is reasonable since adversarial perturbations are typically bounded due to budget constraints. So the above argument implies that a reversed sample will more likely to have the ground-truth label if data region with the ground-truth label has high data density. For the sake of theoretical analysis and understanding, we take the point with highest conditional density P (x̂0 = x|x̂t = xa,t) as the reversed sample, defined as P(xa; t) := argmaxx P (x̂0 = x|x̂t = xa,t). P(xa; t) is a representative of the high density data region in the conditional distribution and P(·; t) is a deterministic purification model. In the following, we characterize the robust region for data region with ground-truth label under P (·; t). The robust region and robust radius for a general deterministic purification model given a classifier are defined below. Definition 3.2 (Robust Region and Robust Radius). Given a classifier f and a point x0, let G(x0) := {x : f(x) = f(x0)} be the data region where samples have the same label as x0. Then given a deterministic purification model P(· ;ψ) with parameter ψ, we define the robust region of G(x0) under P and f as DfP (G(x0);ψ) := {x : f (P(x;ψ)) = f(x0)}, i.e., the set of x such that purified sample P(x;ψ) has the same label as x0 under f . Further, we define the robust radius of x0 as r f P(x0;ψ) := max { r : x0 + ru ∈ DfP (x0;ψ) , ∀||u||2 ≤ 1 } , i.e., the radius of
maximum inclined ball of DfP (x0;ψ) centered around x0. We will omit P and f when it is clear from the context and write D (G(x0);ψ) and r(x0;ψ) instead. Remark 2. In Definition 3.2, the robust region (resp. radius) is defined for each class (resp. point). When using the point with highest P (x̂0 = x|x̂t = xa,t) as the reversed sample, ψ := t.
Now given a sample x0 with ground-truth label, we are ready to characterize the robust region D (G(x0);ψ) under purification model P(·; t) and classifier f . Intuitively, if the adversarial sample xa is near to x0 (in Euclidean distance), xa keeps the same label semantics of x0 and so as the purified sample P(xa; t), which implies that f (P(xa;ψ)) = f(x0). However, the condition that xa is near to x0 is sufficient but not necessary since we can still achieve f (P(xa;ψ)) = f(x0) if xa is near to any sample x̃0 with f (P(x̃a;ψ)) = f(x0). In the following, we will show that the robust region D (G(x0);ψ) is the union of the convex robust sub-regions surrounding every x̃0 with the same label as x0. The following theorem characterizes the convex robust sub-region and robust region respectively. Theorem 3.3. Under conditions B.1 and classifier f , let x0 be the sample with ground-truth label and xa be the adversarial sample, then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)⊤(x′0 − x0) < σ2t log(p(x0)/p(x′0)) + ||x′0 − x0||22 / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. (sketch) (i). Each convex half-space defined by the inequality corresponds to an x′0 such that f(x′0) ≠ f(x0), within which xa satisfies P(x̂0 = x0 | x̂t = xa,t) > P(x̂0 = x′0 | x̂t = xa,t). This implies that P(xa; t) ≠ x′0 and f(P(xa; t)) = f(x0). The convexity follows because the intersection of convex sets is convex. (ii). The "if" follows directly from (i). The "only if" holds because if xa ∉ D(G(x0); t), then there exists x̃1 such that f(x̃1) ≠ f(x0) and P(x̂0 = x̃1 | x̂t = xa,t) > P(x̂0 = x̃0 | x̂t = xa,t) for all x̃0 s.t. f(x̃0) = f(x0), and thus f(P(xa; t)) ≠ f(x0).
Remark 3. Theorem 3.3 implies that when a data region G(x0) has higher data density and larger distances to data regions with other labels, it tends to have a larger robust region, and points in the data region tend to have larger robust radii. Since adversarial attacks typically have small magnitude, with a large robust region the adversarial sample can be recovered to the clean sample with high probability.
In the literature, people focus more on the robust radius (lower bound) r (G(x0); t) (Cohen et al., 2019; Carlini et al., 2022), which can be obtained by finding the maximum inclined ball inside D (G(x0); t) centering x0. Note that although Dsub (x0; t) is convex, D (G(x0); t) is generally not. Therefore, finding r (G(x0); t) is a non-convex optimization problem. In particular, it can be formulated into a disjunctive optimization problem with integer indicator variables, which is typically NP-hard to solve. One alternative could be finding the maximum inclined ball in Dsub (x0; t), which can be formulated into a convex optimization problem whose optimal value provides a lower bound for r (G(x0); t). However, D (G(x0); t) has the potential to provide much larger robustness radius because it might connect different convex robust sub-regions into one, as shown in Figure 2.
Figure 2: Illustration of the robust region D(G(x0); t) = ⋃_{i=1}^{3} Dsub(xi; t), where x0, x1, x2 are samples with the ground-truth label and x3 is a sample with another label. xa = x0 + ϵa is an adversarial sample such that P(xa; t) = x1 ≠ x0, and thus the classification is correct even though xa is not reversed back to x0. rsub(x0) < r(x0) shows our claim that the union leads to a larger robust radius.
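To make the half-space conditions of Theorem 3.3 concrete, the following minimal sketch numerically checks whether a candidate adversarial point lies in Dsub(x0; t). The points, densities, and σ_t² are toy values chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def in_Dsub(x_a, x0, p_x0, others, sigma_t2):
    """Check x_a against every half-space induced by a point x0_prime with a different label.
    `others` is a list of (x0_prime, p_x0_prime) pairs with f(x0_prime) != f(x0)."""
    for x0p, p_x0p in others:
        lhs = np.dot(x_a - x0, x0p - x0)
        rhs = sigma_t2 * np.log(p_x0 / p_x0p) + 0.5 * np.sum((x0p - x0) ** 2)
        if lhs >= rhs:          # violates one half-space => outside D_sub(x0; t)
            return False
    return True

# Toy 2-D example: a high-density clean point x0 and one lower-density point with another label.
x0, x0_prime = np.array([0.0, 0.0]), np.array([2.0, 0.0])
print(in_Dsub(np.array([0.4, 0.3]), x0, p_x0=0.5, others=[(x0_prime, 0.1)], sigma_t2=0.25))  # True
```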
In practice, we cannot guarantee an exact reverse process like reverse-SDE; instead, we establish an approximate reverse process that mimics the exact one. As long as the approximate reverse process is close enough to the exact reverse process, they will generate close conditional distributions based on the adversarial sample. Then the density and locations of the data regions in the two conditional distributions will not differ much, and neither will the robust region for each data region. We take the score-based diffusion model in Song et al. (2021b) as an example and establish Theorem 3.4 to bound the KL-divergence between the conditional distributions generated by reverse-SDE and the score-based diffusion model. Ho et al. (2020) showed that using variational inference to fit DDPM is equivalent to optimizing an objective resembling that of the score-based diffusion model with a specific weighting scheme, so the results extend to DDPM. Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have DKL(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)), where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively, JSM(θ, t; λ(·)) := (1/2) ∫_0^t E_{pτ(x)}[λ(τ) ‖∇x log pτ(x) − sθ(x, τ)‖²₂] dτ, sθ(x, τ) is the score function approximating ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. (sketch) Let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on xa,t. Under conditions B.1, µt and νt are uniquely defined and the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013).
Remark 4. Theorem 3.4 shows that if the training loss is smaller, the conditional distributions generated by reverse-SDE and the score-based diffusion model are closer, and they coincide if the training loss is zero. Furthermore, by Pinsker's inequality, the total variation distance is upper bounded as DTV(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) ≤ √(JSM(θ, t; λ(·)) / 2).
4 DENSEPURE
Inspired by the theoretical analysis, we introduce DensePure and show how to calculate its certified robustness radius via the randomized smoothing algorithm.
Framework. Our framework, DensePure, consists of two components: (1) an off-the-shelf diffusion model with reverse process rev and (2) an off-the-shelf base classifier f .
The pipeline of DensePure is shown in Figure 1. Given an input x, we feed it into the reverse process rev of the diffusion model to get the reversed sample rev(x), and repeat this K times to get K reversed samples {rev(x)_1, · · · , rev(x)_K}. We feed these K reversed samples into the classifier to get the corresponding predictions {f(rev(x)_1), · · · , f(rev(x)_K)} and then apply the majority vote, termed MV, on these predictions to get the final predicted label ŷ = MV({f(rev(x)_1), · · · , f(rev(x)_K)}) = argmax_c Σ_{i=1}^{K} 1{f(rev(x)_i) = c}.
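As a concrete illustration of this pipeline, here is a minimal Python sketch; `reverse` and `classifier` are hypothetical stand-ins for the off-the-shelf diffusion model's reverse process and the pretrained classifier, not calls from any specific library, and a single input is assumed.

```python
import torch
from collections import Counter

def densepure_predict(x, reverse, classifier, K=40):
    """DensePure prediction: run the stochastic reverse process K times and
    take a majority vote over the classifier's predicted labels."""
    votes = []
    for _ in range(K):
        x_rev = reverse(x)                                   # one stochastic run of the reverse process
        label = classifier(x_rev).argmax(dim=-1).item()      # predicted class for this reversed sample
        votes.append(label)
    return Counter(votes).most_common(1)[0][0]
```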
Certified Robustness of DensePure with Randomized Smoothing.
In this paragraph, we describe the algorithm for calculating the certified robustness of DensePure via RS, which offers robustness guarantees for a model within an L2-norm ball.
In particular, we follow a setting similar to Carlini et al. (2022), which uses a DDPM-based diffusion model. The overall algorithm contains three steps:
(1) Our framework estimates n, the number of steps used for the reverse process of the DDPM-based diffusion model. Since randomized smoothing (Cohen et al., 2019) adds Gaussian noise ϵ ∼ N(0, σ²I) to the data input x to get the randomized input x_rs = x + ϵ, we map between the noise required by the randomized example x_rs and the noise required by the diffused data x_n (i.e., x_n ∼ N(x_n; √α_n x_0, (1 − α_n)I)) after n diffusion steps, so that α_n = 1/(1 + σ²). In this way, we can compute the corresponding timestep n = argmin_s {|α_s − 1/(1 + σ²)| : s ∈ [N]}.
(2) Given the calculated timestep n, we scale x_rs by √α_n to obtain the scaled randomized smoothing sample √α_n x_rs. Then we feed √α_n x_rs into the reverse process of the diffusion model K times to get the reversed sample set {x̂_0^1, x̂_0^2, · · · , x̂_0^i, · · · , x̂_0^K}.
(3) We feed the obtained reversed sample set into a standard off-the-shelf classifier f to get the corresponding predicted labels {f(x̂_0^1), f(x̂_0^2), . . . , f(x̂_0^i), . . . , f(x̂_0^K)}, and apply the majority vote, denoted MV(·), on these predicted labels to get the final label for x_rs.
Fast Sampling. To compute the reversed sample, the standard reverse process of DDPM-based models requires repeatedly applying a "single-step" operation n times to get the reversed sample x̂_0, i.e., x̂_0 = Reverse(· · ·Reverse(· · ·Reverse(Reverse(√α_n x_rs; n); n − 1); · · · ; i); · · · ; 1). Here x̂_{i−1} = Reverse(x̂_i; i) is equivalent to sampling x̂_{i−1} from N(x̂_{i−1}; µ_θ(x̂_i, i), Σ_θ(x̂_i, i)), where µ_θ(x̂_i, i) = (1/√(1 − β_i))(x̂_i − (β_i/√(1 − α_i)) ϵ_θ(x̂_i, i)) and Σ_θ := exp(v log β_i + (1 − v) log β̃_i), where v is a parameter learned by DDPM and β̃_i = (1 − α_{i−1})/(1 − α_i) · β_i.
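A minimal sketch of the timestep mapping in steps (1) and (2) is given below; `alphas_cumprod` denotes the diffusion model's cumulative α schedule (called α_s above), and the exact attribute name depends on the DDPM implementation being used.

```python
import numpy as np

def sigma_to_timestep(sigma, alphas_cumprod):
    """Find the diffusion step n whose cumulative alpha best matches 1/(1 + sigma^2)."""
    target = 1.0 / (1.0 + sigma ** 2)
    return int(np.argmin(np.abs(np.asarray(alphas_cumprod) - target)))

def scale_input(x_rs, alphas_cumprod, n):
    """Scale the randomized-smoothing input so it matches the diffused variable x_n."""
    return np.sqrt(alphas_cumprod[n]) * x_rs
```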
To reduce the time complexity, we use the uniform sub-sampling strategy from Nichol & Dhariwal (2021): we uniformly sample a subsequence of size b from the original N-step reverse process. Note that Carlini et al. (2022) set b = 1 for "one-shot" sampling; in this case,
$$\hat{x}_0 = \frac{1}{\sqrt{\alpha_n}}\left(x_n - \sqrt{1-\alpha_n}\,\epsilon_\theta(\sqrt{\alpha_n}x_{rs}, n)\right)$$
is a deterministic value, so the reverse process does not sample from a posterior data distribution conditioned on the input. Instead, we tune the number of sub-sampled DDPM steps to be larger than one (b > 1) so as to sample from a posterior data distribution conditioned on the input. The details of fast sampling are given in Appendix C.2.
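For contrast with the multi-step sampling used here, a sketch of the deterministic one-shot (b = 1) denoising of Carlini et al. (2022) described above is shown below; `eps_model` stands for the learned noise predictor ϵ_θ and is an assumed interface rather than a specific library call.

```python
import torch

def one_shot_denoise(x_rs, eps_model, alpha_n, n):
    """Deterministic b = 1 denoising: invert x_n = sqrt(alpha_n) x_0 + sqrt(1 - alpha_n) eps."""
    a = torch.tensor(alpha_n)
    x_n = torch.sqrt(a) * x_rs
    eps = eps_model(x_n, n)
    return (x_n - torch.sqrt(1.0 - a) * eps) / torch.sqrt(a)
```

DensePure instead uses b > 1 sub-sampled stochastic reverse steps, so each of the K runs draws a different sample from the conditional distribution (see Appendix C.2).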
5 EXPERIMENTS
In this section, we use DensePure to evaluate certified robustness on two standard datasets, CIFAR-10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009).
Experimental settings We follow the experimental setting from Carlini et al. (2022). Specifically, for CIFAR-10, we use the 50-M unconditional improved diffusion model from Nichol & Dhariwal (2021) as the diffusion model. We select the ViT-B/16 model (Dosovitskiy et al., 2020) pretrained on ImageNet-21k and finetuned on CIFAR-10 as the classifier, which achieves 97.9% accuracy on CIFAR-10. For ImageNet, we use the unconditional 256×256 guided diffusion model from Dhariwal & Nichol (2021) as the diffusion model and the pretrained BEiT large model (Bao et al., 2021) trained on ImageNet-21k as the classifier, which achieves 88.6% top-1 accuracy on the ImageNet-1k validation set. We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure, we set K = 40 and b = 10 except for the results in the ablation study. The details about the baselines are in the appendix.
5.1 MAIN RESULTS
We perform DensePure on subsets of CIFAR-10 and ImageNet. We choose the same subsets as in Cohen et al. (2019): 500 samples for CIFAR-10 and 100 samples for ImageNet (the results with 500 samples are shown in Appendix D.10). The results are shown in Table 1. Compared with models that are carefully trained with randomized smoothing techniques in an end-to-end manner (i.e., without an off-the-shelf classifier), we observe that our method with a standard off-the-shelf classifier outperforms them at smaller ϵ = {0.25, 0.5} on both CIFAR-10 and ImageNet, while achieving comparable performance at larger ϵ = {0.75, 1.0}. Compared with the non-diffusion-based methods with off-the-shelf classifiers (i.e., Denoised (Salman et al., 2020) and Lee (Lee, 2021)), both our method and Carlini et al. (2022) are significantly better. These results verify the non-trivial adversarial robustness improvements introduced by the diffusion model. For ImageNet, our method is consistently better than all priors by a large margin.
Since both Carlini et al. (2022) and DensePure use the diffusion model, to better understand the importance of our design, which approximates the label of the high-density region in the conditional distribution, we compare DensePure with Carlini et al. (2022) in a more fine-grained manner.
We show the detailed certified robustness of the model among different σ at different radii for CIFAR-10 in Figure 3-left and for ImageNet in Figure 3-right. We also present our certified accuracy at different ϵ in Appendix D.3. From these results, we find that our method is still consistently better at most ϵ (except ϵ = 0) among different σ. The performance margin between ours and Carlini et al. (2022) becomes even larger at large ϵ. These results further indicate that although the diffusion model improves model robustness, leveraging the posterior data distribution conditioned on the input instance (as DensePure does) via the reverse process, instead of using a single sample (Carlini et al., 2022), is the key to better robustness. Additionally, we use off-the-shelf classifiers, which are ViT-based architectures trained on a larger dataset. In the ablation study below, we select the CNN-based Wide-ResNet architecture trained on the standard dataset from scratch; our method still achieves non-trivial robustness. Further, our experiments in Appendix D.7 show that removing the diffusion model from DensePure deteriorates the performance, which further verifies that our design is non-trivial.
5.2 ABLATION STUDY
Voting samples (K) We first show how K affects the certified accuracy. For efficiency, we select b = 10. We conduct experiments for both datasets. We show the certified accuracy at different radii with σ = 0.25 in Figure 4. The results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.4. Compared with the baseline (Carlini et al., 2022), we find that a larger majority-vote number leads to better certified accuracy. This verifies that DensePure indeed benefits adversarial robustness and that a good approximation of the label of the high-density region requires a large number of voting samples. We find that the certified accuracy almost converges at K = 40; thus, we set K = 40 for our experiments. The results with other σ show a similar tendency. To further improve time efficiency, we can use K-Consensus (Horváth et al., 2021). It accelerates the majority-vote process by 45% ∼ 60% with a negligible performance drop. The experimental details and results are in Appendix D.8.
Fast sampling steps (b) To investigate the role of b, we conduct additional experiments with b ∈ {2, 5} at σ = 0.25. The results on ImageNet are shown in Figure 4, and the results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.5. Looking at the results with majority vote, we find that a larger b leads to better certified accuracy, since a larger b generates images of higher quality. Looking at the results without majority vote, the conclusion is the opposite: a larger b leads to lower certified accuracy, which contradicts our intuition. We conjecture that although more sampling steps normally lead to better image recovery quality, they also introduce more randomness, increasing the probability that the reversed image lands in a data region with the wrong label. These results further verify that the majority vote is necessary for better performance.
Different architectures One advantage of DensePure is its use of an off-the-shelf classifier, so any classifier can be plugged in. We choose convolutional neural network (CNN)-based architectures, Wide-ResNet28-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 with 95.1% accuracy and Wide-ResNet50-2 for ImageNet with 81.5% top-1 accuracy, at σ = 0.25. The results are shown in Table 2 and Figure E in Appendix D.6. Results for more model architectures and more σ on ImageNet are also shown in Appendix D.6. We show that our method can enhance the certified robustness of any given classifier trained on the original data distribution. Notably, although the performance of the CNN-based classifier is lower than that of the Transformer-based classifier, DensePure with a CNN-based classifier can outperform Carlini et al. (2022) with a ViT-based classifier (except at ϵ = 0 for CIFAR-10).
6 RELATED WORK
Using an off-the-shelf generative model to purify adversarial perturbations has become an important direction in adversarial defense. Previous works have developed various purification methods based on different generative models, such as GANs (Samangouei et al., 2018), autoregressive generative models (Song et al., 2018), and energy-based models (Du & Mordatch, 2019; Grathwohl et al., 2020; Hill et al., 2021). More recently, as diffusion models (or score-based models) achieve better generation quality than other generative models (Ho et al., 2020; Dhariwal & Nichol, 2021), many works consider using diffusion models for adversarial purification (Nie et al., 2022; Wu et al., 2022; Sun et al., 2022). Although they have found good empirical results in defending against existing adversarial attacks (Nie et al., 2022), there is no provable guarantee about the robustness of such methods. On the other hand, certified defenses provide guarantees of robustness (Mirman et al., 2018; Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2020; Horváth et al., 2021; Zhang et al., 2018; Raghunathan et al., 2018a;b; Salman et al., 2019b; Wang et al., 2021): they provide a lower bound on model accuracy under constrained perturbations. Among them, approaches based on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019a; Jeong & Shin, 2020; Zhai et al., 2020; Horváth et al., 2021; Jeong et al., 2021; Salman et al., 2020; Lee, 2021; Carlini et al., 2022) show great scalability and achieve promising performance on large networks and datasets. The most similar work to ours is Carlini et al. (2022), which uses diffusion models combined with standard classifiers for certified defense. They view the diffusion model as a black box, without a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
7 CONCLUSION
In this work, we theoretically prove that the diffusion model can purify adversarial examples back to the corresponding clean samples with high probability, as long as the data density of the corresponding clean samples is high enough. Our theoretical analysis characterizes the conditional distribution of the reversed samples given the adversarial input, generated by the diffusion model's reverse process. Using the highest-density point in the conditional distribution as the deterministic reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process, which is potentially much larger than that of previous methods. Our analysis inspires us to propose an effective pipeline, DensePure, for adversarial robustness. We conduct comprehensive experiments to show the effectiveness of DensePure by evaluating its certified robustness via the randomized smoothing algorithm. Note that DensePure is an off-the-shelf pipeline that does not require training a smoothed classifier. Our results show that DensePure achieves new state-of-the-art certified robustness against L2-norm perturbations. We hope that our work sheds light on an in-depth understanding of the diffusion model for adversarial robustness.
Limitations. The time complexity of DensePure is high since it requires repeating the reverse process multiple times. In this paper, we use fast sampling to reduce the time complexity and show that the setting (b = 2 and K = 10) can achieve nontrivial certified accuracy. We leave the more advanced fast sampling strategy as the future direction.
ETHICS STATEMENT
Our work can positively impact the society by improving the robustness and security of AI systems. We have not involved human subjects or data set releases; instead, we carefully follow the provided licenses of existing data and models for developing and evaluating our method.
8 ACKNOWLEDGMENT
We thank the support of NSF grant No.1910100, NSF CNS 2046726, C3 AI and DHS under grant No. 17STQAC00001-06-00, DARPA under grant N66001-15-C-4066, the Center for Long-Term Cybersecurity, and Berkeley Deep Drive. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the sponsors.
REPRODUCIBILITY STATEMENT
For theoretical analysis, all necessary assumptions are listed in B.1 and the complete proofs are included in B.2. The experimental setting and datasets are provided in section 5. The pseudo-code for DensePure is in C.1 and the fast sampling procedures are provided in C.2.
APPENDIX
A NOTATIONS
p : data distribution
P(A) : probability of event A
C^k : set of functions with continuous k-th derivatives
w(t) : standard Wiener process
w̄(t) : reverse-time standard Wiener process
h(x, t) : drift coefficient in SDE
g(t) : diffusion coefficient in SDE
α_t : scaling coefficient at time t
σ_t² : variance of added Gaussian noise at time t
{x_t}_{t∈[0,1]} : diffusion process generated by SDE
{x̂_t}_{t∈[0,1]} : reverse process generated by reverse-SDE
p_t : distribution of x_t and x̂_t
{x_1, x_2, . . . , x_N} : diffusion process generated by DDPM
{β_i}_{i=1}^N : pre-defined noise scales in DDPM
ϵ_a : adversarial attack
x_a : adversarial sample
x_{a,t} : scaled adversarial sample
f(·) : classifier
g(·) : smoothed classifier
P(x̂_0 = x | x̂_t = x_{a,t}) : density of the conditional distribution generated by reverse-SDE based on x_{a,t}
P(x_a; t) : purification model with highest density point
G(x_0) : data region with the same label as x_0
D^f_P(G(x_0); t) : robust region for G(x_0) associated with base classifier f and purification model P
r^f_P(x_0; t) : robust radius for the point x_0 associated with base classifier f and purification model P
D_sub(x_0; t) : convex robust sub-region
s_θ(x, t) : score function
{x^θ_t}_{t∈[0,1]} : reverse process generated by the score-based diffusion model
P(x^θ_0 = x | x^θ_t = x_{a,t}) : density of the conditional distribution generated by the score-based diffusion model based on x_{a,t}
λ(τ) : weighting scheme of the training loss for the score-based diffusion model
J_SM(θ, t; λ(·)) : truncated training loss for the score-based diffusion model
µ_t, ν_t : path measures for {x̂_τ}_{τ∈[0,t]} and {x^θ_τ}_{τ∈[0,t]} respectively
B MORE DETAILS ABOUT THEORETICAL ANALYSIS
B.1 ASSUMPTIONS
(i) The data distribution p ∈ C² and E_{x∼p}[||x||²₂] < ∞.
(ii) ∀t ∈ [0, T]: h(·, t) ∈ C¹, and ∃C > 0 such that ∀x ∈ Rⁿ, t ∈ [0, T]: ||h(x, t)||₂ ≤ C(1 + ||x||₂).
(iii) ∃C > 0 such that ∀x, y ∈ Rⁿ: ||h(x, t) − h(y, t)||₂ ≤ C||x − y||₂.
(iv) g ∈ C and ∀t ∈ [0, T]: |g(t)| > 0.
(v) ∀t ∈ [0, T]: sθ(·, t) ∈ C¹, and ∃C > 0 such that ∀x ∈ Rⁿ, t ∈ [0, T]: ||sθ(x, t)||₂ ≤ C(1 + ||x||₂).
(vi) ∃C > 0 such that ∀x, y ∈ Rⁿ: ||sθ(x, t) − sθ(y, t)||₂ ≤ C||x − y||₂.
B.2 THEOREMS AND PROOFS
Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and point x_{a,t} = √α_t x_a will generate a reversed random variable x̂_0 with conditional distribution
$$P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t}) \propto p(x)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\, e^{\frac{-\|x-x_a\|_2^2}{2\sigma_t^2}},$$
where σ_t² = (1 − α_t)/α_t is the variance of the Gaussian noise added at timestep t in the diffusion process SDE.
Proof. Under the assumptions, we know {x_t}_{t∈[0,1]} and {x̂_t}_{t∈[0,1]} follow the same distribution, which means
$$P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t}) = \frac{P(\hat{x}_0 = x, \hat{x}_t = x_{a,t})}{P(\hat{x}_t = x_{a,t})} = \frac{P(x_0 = x, x_t = x_{a,t})}{P(x_t = x_{a,t})} = \frac{P(x_0 = x)\, P(x_t = x_{a,t} \mid x_0 = x)}{P(x_t = x_{a,t})}$$
$$\propto P(x_0 = x)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\, e^{\frac{-\|x-x_a\|_2^2}{2\sigma_t^2}} = p(x)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\, e^{\frac{-\|x-x_a\|_2^2}{2\sigma_t^2}},$$
where the third equality is due to the chain rule of probability and the last step is a result of the diffusion process.
Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with ground-truth label and xa be the adversarial sample; then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
$$\mathcal{D}_{\mathrm{sub}}(x_0; t) := \bigcap_{\{x'_0 :\, f(x'_0) \neq f(x_0)\}} \left\{ x_a : (x_a - x_0)^\top (x'_0 - x_0) < \sigma_t^2 \log\left(\frac{p(x_0)}{p(x'_0)}\right) + \frac{\|x'_0 - x_0\|_2^2}{2} \right\},$$
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set,
$$\mathcal{D}(\mathcal{G}(x_0); t) := \bigcup_{\tilde{x}_0 :\, f(\tilde{x}_0) = f(x_0)} \mathcal{D}_{\mathrm{sub}}(\tilde{x}_0; t).$$
In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. We start with part (i).
The main idea is to prove that a point x′0 such that f(x ′ 0) ̸= f(x0) should have lower density than x0 in the conditional distribution in Theorem 3.1 so that P(xa; t) cannot be x′0. In other words, we should have
P (x̂0 = x0|x̂t = xa,t) > P (x̂0 = x′0 | x̂t = xa,t) .
By Theorem 3.1, this is equivalent to
$$p(x_0)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\, e^{\frac{-\|x_0-x_a\|_2^2}{2\sigma_t^2}} > p(x'_0)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\, e^{\frac{-\|x'_0-x_a\|_2^2}{2\sigma_t^2}}$$
$$\Leftrightarrow\ \log\left(\frac{p(x_0)}{p(x'_0)}\right) > \frac{1}{2\sigma_t^2}\left(\|x_0 - x_a\|_2^2 - \|x'_0 - x_a\|_2^2\right)$$
$$\Leftrightarrow\ \log\left(\frac{p(x_0)}{p(x'_0)}\right) > \frac{1}{2\sigma_t^2}\left(\|x_0 - x_a\|_2^2 - \|x'_0 - x_0 + x_0 - x_a\|_2^2\right)$$
$$\Leftrightarrow\ \log\left(\frac{p(x_0)}{p(x'_0)}\right) > \frac{1}{2\sigma_t^2}\left(2(x_a - x_0)^\top(x'_0 - x_0) - \|x'_0 - x_0\|_2^2\right).$$
Re-organizing the above inequality, we obtain
$$(x_a - x_0)^\top(x'_0 - x_0) < \sigma_t^2 \log\left(\frac{p(x_0)}{p(x'_0)}\right) + \frac{1}{2}\|x'_0 - x_0\|_2^2.$$
Note that xa appears with degree at most one in every term of the above inequality, so the inequality defines a half-space in Rⁿ for every (x0, x′0) pair. Further, we have to satisfy the inequality for every x′0 such that f(x′0) ≠ f(x0); therefore, by intersecting over all such half-spaces, we obtain a convex Dsub(x0; t). Then we prove part (ii).
On the one hand, if xa ∈ D(G(x0); t), then there exists an x̃0 such that f(x̃0) = f(x0) and xa ∈ Dsub(x̃0; t). By part (i), x̃0 has higher probability than all other points with labels different from x0 in the conditional distribution P(x̂0 = x|x̂t = xa,t) characterized by Theorem 3.1. Therefore, P(xa; t) has the same label as x0. On the other hand, if xa ∉ D(G(x0); t), then there is a point x̃1 with a label different from x0 such that for any x̃0 with the same label as x0, P(x̂0 = x̃1|x̂t = xa,t) > P(x̂0 = x̃0|x̂t = xa,t). In other words, P(xa; t) has a label different from x0.
Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have
$$D_{\mathrm{KL}}\big(P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t})\,\big\|\,P(x^\theta_0 = x \mid x^\theta_t = x_{a,t})\big) = J_{\mathrm{SM}}(\theta, t; \lambda(\cdot)),$$
where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively,
$$J_{\mathrm{SM}}(\theta, t; \lambda(\cdot)) := \frac{1}{2}\int_0^t \mathbb{E}_{p_\tau(x)}\left[\lambda(\tau)\,\|\nabla_x \log p_\tau(x) - s_\theta(x, \tau)\|_2^2\right] d\tau,$$
sθ(x, τ) is the score function approximating ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. Similar to the proof of (Song et al., 2021a, Theorem 1), let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on the scaled adversarial sample xa,t. Under conditions B.1, the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013):
$$D_{\mathrm{KL}}\big(P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t})\,\big\|\,P(x^\theta_0 = x \mid x^\theta_t = x_{a,t})\big) = -\mathbb{E}_{\mu_t}\left[\log\frac{d\nu_t}{d\mu_t}\right]$$
$$\overset{(i)}{=} \mathbb{E}_{\mu_t}\left[\int_0^t g(\tau)\big(\nabla_x\log p_\tau(x) - s_\theta(x,\tau)\big)\, d\overline{w}_\tau + \frac{1}{2}\int_0^t g(\tau)^2\,\big\|\nabla_x\log p_\tau(x) - s_\theta(x,\tau)\big\|_2^2\, d\tau\right]$$
$$\overset{(ii)}{=} \mathbb{E}_{\mu_t}\left[\frac{1}{2}\int_0^t g(\tau)^2\,\big\|\nabla_x\log p_\tau(x) - s_\theta(x,\tau)\big\|_2^2\, d\tau\right] = \frac{1}{2}\int_0^t \mathbb{E}_{p_\tau(x)}\left[g(\tau)^2\,\big\|\nabla_x\log p_\tau(x) - s_\theta(x,\tau)\big\|_2^2\right] d\tau = J_{\mathrm{SM}}\big(\theta, t; g(\cdot)^2\big),$$
where (i) is due to the Girsanov theorem and (ii) is due to the martingale property of Itô integrals.
C MORE DETAILS ABOUT DENSEPURE
C.1 PSEUDO-CODE
We provide the pseudo-code of DensePure in Algorithm 1 and Algorithm 2.
Algorithm 1 DensePure pseudo-code with the highest density point
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose ψ = t
2: Input sample xa = x0 + ϵa
3: Compute x̂0 = P(xa; ψ)
4: ŷ = f(x̂0)
Algorithm 2 DensePure pseudo-code with majority vote
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose σ
2: Compute α_n = 1/(1 + σ²), n = argmin_s {|α_s − 1/(1 + σ²)| : s ∈ {1, 2, · · · , N}}
3: Generate input sample x_rs = x0 + ϵ, ϵ ∼ N(0, σ²I)
4: Choose schedule S^b; get x̂_0^i ← rev(√α_n x_rs)_i, i = 1, 2, . . . , K with fast sampling
5: ŷ = MV({f(x̂_0^1), . . . , f(x̂_0^K)}) = argmax_c Σ_{i=1}^{K} 1{f(x̂_0^i) = c}
C.2 DETAILS ABOUT FAST SAMPLING
Applying the single-step operation n times is time-consuming. In order to reduce the time complexity, we follow the method used in (Nichol & Dhariwal, 2021) and sample a subsequence S^b with b values from the original schedule S (i.e., S = {n, n − 1, · · · , 1}, where S_j = j is the j-th element in S), namely S^b = {n, ⌊n − n/b⌋, · · · , 1}, where S^b_j is the j-th element of S^b, S^b_j = ⌊n − jn/b⌋ for all j < b, and S^b_b = 1.
Within this context, we adapt the original α schedule α^S = {α_1, · · · , α_i, · · · , α_n} used for the single-step operation to the new schedule α^{S^b} = {α_{S^b_1}, · · · , α_{S^b_j}, · · · , α_{S^b_b}} (i.e., α^{S^b}_i = α_{S^b_i} = α_{⌊n − in/b⌋} is the i-th element of α^{S^b}). We calculate the corresponding schedules β^{S^b} = {β^{S^b}_1, · · · , β^{S^b}_b} and β̃^{S^b} = {β̃^{S^b}_1, · · · , β̃^{S^b}_b}, where β^{S^b}_i = 1 − α^{S^b}_i / α^{S^b}_{i−1} and β̃^{S^b}_i = (1 − α^{S^b}_{i−1}) / (1 − α^{S^b}_i) · β^{S^b}_i. With these new schedules, we can use b reverse steps to calculate x̂_0 = Reverse(· · ·Reverse(Reverse(x_n; S^b_b); S^b_{b−1}); · · · ; 1). Since Σθ(x_{S^b_i}, S^b_i) is parameterized as a range between β^{S^b} and β̃^{S^b}, it is automatically rescaled. Thus, x̂_{S^b_{i−1}} = Reverse(x̂_{S^b_i}; S^b_i) is equivalent to sampling x_{S^b_{i−1}} from N(x_{S^b_{i−1}}; µθ(x_{S^b_i}, S^b_i), Σθ(x_{S^b_i}, S^b_i)).

Certified Accuracy at ϵ (%)
Methods                          Noise      0.0          0.25         0.5          0.75         1.0
Carlini (Carlini et al., 2022)   σ = 0.25   88.0         73.8         56.2         41.6         0.0
Carlini (Carlini et al., 2022)   σ = 0.5    74.2         62.0         50.4         40.2         31.0
Carlini (Carlini et al., 2022)   σ = 1.0    49.4         41.4         34.2         27.8         21.8
Ours                             σ = 0.25   87.6(-0.4)   76.6(+2.8)   64.6(+8.4)   50.4(+8.8)   0.0(+0.0)
Ours                             σ = 0.5    73.6(-0.6)   65.4(+3.4)   55.6(+5.2)   46.0(+5.8)   37.4(+6.4)
Ours                             σ = 1.0    55.0(+5.6)   47.8(+6.4)   40.8(+6.6)   33.0(+5.2)   28.2(+6.4)
Table A: Certified accuracy compared with Carlini et al. (2022) for CIFAR-10 at all σ. The numbers in brackets are the differences in certified accuracy between the two methods. Our diffusion model and classifier are the same as in Carlini et al. (2022).
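A minimal sketch of building the sub-sampled schedules described in this subsection is given below; `alphas_cumprod` is assumed to hold the original cumulative α values for steps 1, . . . , n, and the exact subsequence convention is an assumption consistent with the description above rather than the paper's reference implementation.

```python
import numpy as np

def subsample_steps(n, b):
    """Subsequence S^b = {n, floor(n - n/b), ..., 1} of b reverse steps (descending)."""
    steps = [n - (j * n) // b for j in range(b)]
    steps[-1] = 1                              # the final reverse step is always 1
    return steps

def subsampled_schedules(alphas_cumprod, steps):
    """Recompute the cumulative-alpha, beta, and beta-tilde schedules on the subsequence."""
    asc = steps[::-1]                          # ascending timesteps
    alpha_sub = np.array([alphas_cumprod[s - 1] for s in asc])
    alpha_prev = np.concatenate(([1.0], alpha_sub[:-1]))
    beta_sub = 1.0 - alpha_sub / alpha_prev                           # beta_i = 1 - alpha_i / alpha_{i-1}
    beta_tilde_sub = (1.0 - alpha_prev) / (1.0 - alpha_sub) * beta_sub  # posterior variances
    return alpha_sub, beta_sub, beta_tilde_sub
```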
D MORE EXPERIMENTAL DETAILS AND RESULTS
D.1 IMPLEMENTATION DETAILS
We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. The sampling numbers when computing the certified radius are n = 100,000 for CIFAR-10 and n = 10,000 for ImageNet. We evaluate the certified robustness on a 500-sample subset of the CIFAR-10 test set and a 100-sample subset of the ImageNet validation set. For the parameters of DensePure, we set K = 40 and b = 10 except for the results in the ablation study.
D.2 BASELINES.
We select randomized-smoothing-based methods as our baselines, including PixelDP (Lecuyer et al., 2019), RS (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a), Consistency (Jeong & Shin, 2020), MACER (Zhai et al., 2020), Boosting (Horváth et al., 2021), SmoothMix (Jeong et al., 2021), Denoised (Salman et al., 2020), Lee (Lee, 2021), and Carlini (Carlini et al., 2022). Among them, PixelDP, RS, SmoothAdv, Consistency, MACER, and SmoothMix require training a smoothed classifier for better certification performance, while the others do not. Salman et al. and Lee use an off-the-shelf classifier but without a diffusion model. The most similar one to ours is Carlini et al., which also uses both an off-the-shelf diffusion model and classifier. The above two settings mainly follow Carlini et al. (2022), which makes it easier to compare with their results.
D.3 MAIN RESULTS FOR CERTIFIED ACCURACY
We compare with Carlini et al. (2022) in a more fine-grained manner. We provide the certified accuracy at different ϵ in Table A for CIFAR-10 and Table B for ImageNet. We include the accuracy difference between ours and Carlini et al. (2022) in brackets in the tables. We observe from the tables that the certified accuracy of our method outperforms Carlini et al. (2022) except at ϵ = 0 with σ = 0.25, 0.5 for CIFAR-10.
D.4 EXPERIMENTS FOR VOTING SAMPLES
Here we provide more experiments with σ ∈ {0.5, 1.0} and b = 10 for different numbers of voting samples K in Figure A and Figure B. The results for CIFAR-10 are in Figure G. We can draw the same conclusions as in the main text.
Certified Accuracy at ϵ (%)
Methods                          Noise      0.0          0.5          1.0          1.5           2.0          3.0
Carlini (Carlini et al., 2022)   σ = 0.25   77.0         71.0         0.0          0.0           0.0          0.0
Carlini (Carlini et al., 2022)   σ = 0.5    74.0         67.0         54.0         46.0          0.0          0.0
Carlini (Carlini et al., 2022)   σ = 1.0    59.0         53.0         49.0         38.0          29.0         22.0
Ours                             σ = 0.25   80.0(+3.0)   76.0(+5.0)   0.0(+0.0)    0.0(+0.0)     0.0(+0.0)    0.0(+0.0)
Ours                             σ = 0.5    75.0(+1.0)   72.0(+5.0)   62.0(+8.0)   49.0(+3.0)    0.0(+0.0)    0.0(+0.0)
Ours                             σ = 1.0    61.0(+2.0)   57.0(+4.0)   53.0(+4.0)   49.0(+11.0)   37.0(+8.0)   26.0(+4.0)
Table B: Certified accuracy compared with Carlini et al. (2022) for ImageNet at all σ. The numbers in brackets are the differences in certified accuracy between the two methods. Our diffusion model and classifier are the same as in Carlini et al. (2022).
Figure A: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 0.50.
D.5 EXPERIMENTS FOR FAST SAMPLING STEPS
We also run additional experiments with b ∈ {1, 2, 10} at σ = 0.5, 1.0. The results are shown in Figure C and Figure D. The results for CIFAR-10 are in Figure G. We draw the same conclusions as in the main text.
D.6 EXPERIMENTS FOR DIFFERENT ARCHITECTURES
We try different model architectures for ImageNet, including Wide ResNet-50-2 and ResNet-152, with b = 2 and K = 10. The results are shown in Figure F. We find that our method outperforms Carlini et al. (2022) for all σ among different classifiers.
D.7 EXPERIMENTS FOR RANDOMIZED SMOOTHING WITHOUT DIFFUSION MODEL
To show the effectiveness of our diffusion model design, we remove the diffusion model from our pipeline and rerun the experiments. Specifically, first, we remove the diffusion model and perform randomized smoothing only on the pretrained classifier used in DensePure (i.e., ViT-B/16 for CIFAR-10 and BEiT for ImageNet). The results are shown in Table C and Table D. The numbers in brackets are the robust accuracy of the pretrained classifier minus the robust accuracy of DensePure. From the results, we conclude that without the help of diffusion models, neither ViT nor BEiT reaches high certified accuracy.
Second, we conduct additional experiments to fairly compare with randomized smoothing without diffusion models under the majority-vote setting. Specifically, we activate droppath in BEiT at the inference stage to support majority votes; the other settings are the same as for DensePure. The results are shown in Table E. The numbers in brackets are the robust accuracy of BEiT with majority votes minus the robust accuracy of DensePure. We find that simply performing majority votes on the BEiT classifier does not result in higher certified robustness.
Figure B: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 1.00.
Figure C: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy under different L2 adversarial perturbation bounds with Gaussian noise σ = 0.50.
Third, to compare with randomized smoothing without a diffusion model, we also evaluate the certified accuracy of Gaussian-augmentation-trained ViT models on CIFAR-10. The results in Table F show that DensePure still achieves higher certified accuracy than randomized smoothing even on Gaussian-augmented models without diffusion models. The numbers in brackets are the differences between the robust accuracy of Gaussian-augmentation randomized smoothing and that of DensePure.
D.8 EXPERIMENTS FOR K-CONSENSUS AGGREGATION
To improve the efficiency of our algorithm, we try K-Consensus Aggregation, where an early stop is triggered if the classification results of k consecutive reversed samples are the same. Here we calculate the certified robustness for 100 subsamples of CIFAR-10 and ImageNet with 2 sampling steps, a maximum of 10 majority votes, and a consensus threshold k = 3. Results are shown in Table G and Table H. The "Avg MV" column in the tables reports the average number of majority votes actually required by our algorithm. For instance, if the predicted labels of the first 3 reversed samples are the same, the actual number of majority votes is 3. The numbers in brackets are the differences in certified accuracy with and without K-Consensus Aggregation.
D.9 EXPERIMENTS FOR CERTIFIED ACCURACY WITH LESS SAMPLING STEPS AND VOTE NUMBERS
We also conduct additional experiments with 2 sampling steps and 5 majority votes. The results are shown in Table I. We find that our method still achieves better results than the existing method.
Figure D: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy under different L2 adversarial perturbation bounds with Gaussian noise σ = 1.00.
Figure E: Certified accuracy with different architectures (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy under different L2 adversarial perturbation bounds with Gaussian noise σ = 0.25.
D.10 EXPERIMENTS FOR DENSEPURE 500 TEST SAMPLING NUMBER RESULTS ON IMAGENET
We increase the ImageNet test sampling number from 100 to 500 and update the experimental results in Table J and Table K. We can draw similar conclusions.
Figure F: Certified accuracy on ImageNet for different architectures (left: Wide ResNet-50-2, right: ResNet-152). The lines represent the certified accuracy under different L2 perturbation bounds with Gaussian noise σ ∈ {0.25, 0.50, 1.00}.
Figure G: Ablation study. The left panel shows the certified accuracy among different vote numbers at radii ϵ ∈ {0.0, 0.25, 0.5, 0.75}; each line represents the certified accuracy of our method among different vote numbers K with Gaussian noise σ = 0.25. The right panel shows the certified accuracy with different fast sampling steps b; each line shows the certified accuracy under different L2 adversarial perturbation bounds.
Certified Accuracy at ϵ (%)
Noise      0.0           0.25          0.5          0.75         1.0
σ = 0.25   20.8(-66.8)   7.4(-69.2)    1.8(-62.8)   0.2(-50.2)   0.0(+0.0)
σ = 0.5    11.6(-62.0)   6.6(-58.8)    3.8(-51.8)   1.2(-44.8)   0.2(-37.2)
σ = 1.0    10.6(-44.4)   10.6(-37.4)   9.4(-31.4)   9.4(-23.6)   9.4(-18.8)
Table C: Certified accuracy of randomized smoothing on the pretrained classifier ViT-B/16 at all σ for CIFAR-10.
Certified Accuracy at ϵ (%)
Noise      0.0           0.5           1.0          1.5          2.0          3.0
σ = 0.25   73.2(-10.8)   55.8(-22.0)   0.0(+0.0)    0.0(+0.0)    0.0(+0.0)    0.0(+0.0)
σ = 0.5    7.8(-72.4)    4.6(-71.0)    3.2(-63.8)   1.0(-53.6)   0.0(+0.0)    0.0(+0.0)
σ = 1.0    0.0(-67.8)    0.0(-61.4)    0.0(-55.6)   0.0(-50.0)   0.0(-42.2)   0.0(-25.8)
Table D: Certified accuracy of randomized smoothing on the pretrained classifier BEiT at all σ for ImageNet.
Certified Accuracy at ϵ (%)
Noise      0.0           0.5           1.0          1.5          2.0          3.0
σ = 0.25   73.8(-10.2)   58.0(-19.8)   0.0(+0.0)    0.0(+0.0)    0.0(+0.0)    0.0(+0.0)
σ = 0.5    9.0(-71.2)    7.0(-68.6)    4.0(-63.0)   2.0(-52.6)   0.0(+0.0)    0.0(+0.0)
σ = 1.0    0.0(-67.8)    0.0(-61.4)    0.0(-55.6)   0.0(-50.0)   0.0(-42.2)   0.0(-25.8)
Table E: Certified accuracy of randomized smoothing on droppath-activated BEiT with 10 majority votes at all σ for ImageNet.
Certified Accuracy at ϵ (%)
Noise      0.0          0.25         0.5           0.75          1.0
σ = 0.25   88.2(+0.6)   71.4(-5.2)   53.2(-11.4)   35.2(-15.2)   0.0(+0.0)
σ = 0.5    69.8(-3.8)   60.0(-5.4)   48.4(-7.2)    37.2(-8.8)    27.2(-10.2)
σ = 1.0    49.0(-6.0)   41.8(-6.0)   34.0(-6.8)    27.0(-6.0)    22.0(-6.2)
Table F: Certified accuracy of randomized smoothing on Gaussian-augmentation-trained ViT at all σ on CIFAR-10.
Certified Accuracy at ϵ (%)
Noise      0.0        0.25       0.5        0.75       1.0        Avg MV
σ = 0.25   92(+0.0)   77(+0.0)   60(+0.0)   48(-1.0)   0(+0.0)    3.84
σ = 0.5    74(+0.0)   65(+0.0)   53(-1.0)   45(+0.0)   40(+0.0)   4.43
σ = 1.0    53(+0.0)   46(+0.0)   42(+0.0)   31(+0.0)   25(+0.0)   5.49
Table G: Certified accuracy and average majority votes with 2 sampling steps and consensus threshold k = 3 at all σ for CIFAR-10.
Certified Accuracy at ϵ (%)
Noise      0.0        0.5        1.0        1.5        2.0        3.0        Avg MV
σ = 0.25   78(+0.0)   74(+0.0)   0(+0.0)    0(+0.0)    0(+0.0)    0(+0.0)    3.34
σ = 0.5    75(+0.0)   69(+0.0)   61(+0.0)   47(+0.0)   0(+0.0)    0(+0.0)    3.89
σ = 1.0    60(+0.0)   54(+0.0)   50(+0.0)   41(+0.0)   32(+0.0)   23(+0.0)   5.23
Table H: Certified accuracy and average majority votes with 2 sampling steps and consensus threshold k = 3 at all σ for ImageNet.
Certified Accuracy at ϵ (%)
                                                  CIFAR-10                                            ImageNet
Method                            Off-the-shelf   0.25         0.5          0.75         1.0          0.5          1.0          1.5          2.0          3.0
PixelDP (Lecuyer et al., 2019)    ✗               (71.0)22.0   (44.0)2.0    -            -            (33.0)16.0   -            -            -            -
RS (Cohen et al., 2019)           ✗               (75.0)61.0   (75.0)43.0   (65.0)32.0   (65.0)23.0   (67.0)49.0   (57.0)37.0   (57.0)29.0   (44.0)19.0   (44.0)12.0
SmoothAdv (Salman et al., 2019a)  ✗               (82.0)68.0   (76.0)54.0   (68.0)41.0   (64.0)32.0   (63.0)54.0   (56.0)42.0   (56.0)34.0   (41.0)26.0   (41.0)18.0
Consistency (Jeong & Shin, 2020)  ✗               (77.8)68.8   (75.8)58.1   (72.9)48.5   (52.3)37.8   (55.0)50.0   (55.0)44.0   (55.0)34.0   (41.0)24.0   (41.0)17.0
MACER (Zhai et al., 2020)         ✗               (81.0)71.0   (81.0)59.0   (66.0)46.0   (66.0)38.0   (68.0)57.0   (64.0)43.0   (64.0)31.0   (48.0)25.0   (48.0)14.0
Boosting (Horváth et al., 2021)   ✗               (83.4)70.6   (76.8)60.4   (71.6)52.4   (73.0)38.8   (65.6)57.0   (57.0)44.6   (57.0)38.4   (44.6)28.6   (38.6)21.2
SmoothMix (Jeong et al., 2021)    ✓               (77.1)67.9   (77.1)57.9   (74.2)47.7   (61.8)37.2   (55.0)50.0   (55.0)43.0   (55.0)38.0   (40.0)26.0   (40.0)17.0
Denoised (Salman et al., 2020)    ✓               (72.0)56.0   (62.0)41.0   (62.0)28.0   (44.0)19.0   (60.0)33.0   (38.0)14.0   (38.0)6.0    -            -
Lee (Lee, 2021)                   ✓               60.0         42.0         28.0         19.0         41.0         24.0         11.0         -            -
Carlini (Carlini et al., 2022)    ✓               (88.0)73.8   (88.0)56.2   (88.0)41.6   (74.2)31.0   (82.0)74.0   (77.2)59.8   (77.2)47.0   (64.6)31.0   (64.6)19.0
Ours                              ✓               (87.6)76.6   (87.6)64.6   (87.6)50.4   (73.6)37.4   (84.0)77.8   (80.2)67.0   (80.2)54.6   (67.8)42.2   (67.8)25.8
Table J: Certified accuracy compared with existing works. The certified accuracy at ϵ = 0 for each model is shown in parentheses. The certified accuracy in each cell is from the respective paper, except for Carlini et al. (2022). Our diffusion model and classifier are the same as Carlini et al. (2022), where the off-the-shelf classifier uses ViT-based architectures trained on a large dataset (ImageNet-22k).
Certified Accuracy at ϵ (%)
Methods                          Noise      0.0          0.5          1.0          1.5          2.0         3.0
Carlini (Carlini et al., 2022)   σ = 0.25   82.0         74.0         0.0          0.0          0.0         0.0
Carlini (Carlini et al., 2022)   σ = 0.5    77.2         71.8         59.8         47.0         0.0         0.0
Carlini (Carlini et al., 2022)   σ = 1.0    64.6         57.8         49.2         40.6         31.0        19.0
Ours                             σ = 0.25   84.0(+2.0)   77.8(+3.8)   0.0(+0.0)    0.0(+0.0)    0.0(+0.0)   0.0(+0.0)
Ours                             σ = 0.5    80.2(+3.0)   75.6(+3.8)   67.0(+7.2)   54.6(+7
| 1. What is the focus of the paper regarding adversarial robustness?
2. What are the strengths of the proposed method, particularly in terms of theoretical analysis and modeling/algorithmic aspects?
3. What are the weaknesses of the paper, especially regarding the time complexity of the method and the claimed but seemingly unsupported contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the conditions under which diffusion model can work well for purification of adversarially perturbed samples. A simple diffusion-based purification method named DensePure is proposed by using majority vote and achieves higher certified accuracy in comparison with other method on CIFAR-10 and ImageNet.
Strengths And Weaknesses
Strengths:
S1) Theoretical strength: This paper theoretically analyzes, for the first time, why and how a diffusion-based purification model can enhance the adversarial robustness of a given classifier. The robust region of a sample (under a deterministic inverse purification process and a base classifier) is characterized as the union of several convex sub-robust regions (indexed by samples with the same ground-truth label). The newly characterized robust region may provide a larger robust radius than other methods, which indicates better robustness.
S2) Modelling/algorithmic strength: The proposed DensePure performs consistently better than existing methods on ImageNet, with 7% improvement on average.
Weaknesses:
W1) Time complexity of DensePure: As it requires repeating the reverse process multiple times, DensePure is really time-consuming and this drawback may prevent it from further applications to large datasets.
W2) Claimed but seemingly not well-supported contributions: The authors claimed their first contributions as "We prove that under constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model". However, it seems not well-supported. The reason is as follows:
Let x_0 be the clean sample and x_a be its adversarially perturbed sample. Theorem 3.1 characterizes the distribution of the inversed variable x̂_0 from a scaled adversary x_{a,t} = √α_t x_a as
$$P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t}) \propto p(x)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\exp\left(\frac{-\|x - x_a\|_2^2}{2\sigma_t^2}\right).$$
Letting x = x_0, we obtain
$$P(\hat{x}_0 = x_0 \mid \hat{x}_t = x_{a,t}) \propto p(x_0)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\exp\left(\frac{-\|x_0 - x_a\|_2^2}{2\sigma_t^2}\right),$$
which seems unable to directly imply "P(x̂_0 = x_0 | x̂_t = x_{a,t}) is high" or "P(||x̂_0 − x_0|| ≤ δ | x̂_t = x_{a,t}) is high with a small δ".
So, I failed to understand why "under constrained data density property, an adversarial example can be recovered back to the original clean sample with high probability". What is the "constrained data density property"? Can the authors provide upper bounds on ||x̂_0 − x_0|| to verify "an adversarial example can be recovered back to the original clean sample with high probability"? Did I miss something?
----------------After rebuttal------------ In the rebuttal, both of my main concerns about "Weakness 1: Time complexity of DensePure" and "Weakness 2: Claimed but seemingly not well-supported contributions" have been well explained. Therefore, I decided to change my score from 6 to 8.
Clarity, Quality, Novelty And Reproducibility
Clarity score 7/10: In general, this paper is well-written. However, it still has typos. For example, "the robust region for data region with ground-truth label under P(·; t)" should be "the robust region for data region with ground-truth label under \mathcal{P}(·; t)".
Quality score 6/10: The theoretical results as well as the proposed DensePure are technically sound. However, the first claimed contribution seems not well supported. (See Weakness (W2) for more details. If I misunderstood this paper, the authors please point it out directly.)
Novelty score 7/10: To the best of my knowledge, this paper theoretically analyzes why and how diffusion model performs well in purification by proposing three interesting and novel theorems.
Reproducibility score 6/10: Although the code is not shared, I think the details in the supplementary materials are sufficient for reproduction.
ICLR | Title
DensePure: Understanding Diffusion Models for Adversarial Robustness
Abstract
Diffusion models have been recently employed to improve certified robustness through the process of denoising. However, the theoretical understanding of why diffusion models are able to improve the certified robustness is still lacking, preventing from further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method DensePure, designed to improve the certified robustness of a pretrained model (i.e. classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to get multiple reversed samples, which are then passed through the classifier, followed by majority voting of inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process in a diffusion model is also high; thus sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with a high probability. By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model’s reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high density region in the conditional distribution so that it can enhance certified robustness. We conduct extensive experiments to demonstrate the effectiveness of DensePure by evaluating its certified robustness given a standard model via randomized smoothing. We show that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average. Project page:https://densepure.github.io/.
1 INTRODUCTION
Diffusion models have been shown to be a powerful image generation tool (Ho et al., 2020; Song et al., 2021b) owing to their iterative diffusion and denoising processes. These models have achieved state-of-the-art performance on sample quality (Dhariwal & Nichol, 2021; Vahdat et al., 2021) as well as effective mode coverage (Song et al., 2021a). A diffusion model usually consists of two processes: (i) a forward diffusion process that converts data to noise by gradually adding noise to the input, and (ii) a reverse generative process that starts from noise and generates data by denoising one step at a time (Song et al., 2021b).
Given the natural denoising property of diffusion models, empirical studies have leveraged them for adversarial purification (Nie et al., 2022; Wu et al., 2022; Carlini et al., 2022). For instance, Nie et al. (2022) employed diffusion models for adversarial purification (DiffPure). They empirically show that by carefully choosing the amount of Gaussian noise added during the diffusion process, adversarial perturbations can be removed while preserving the true label semantics. Despite the significant empirical results, there is no provable guarantee of the achieved robustness. A concurrent work (Carlini et al., 2022) instantiated the randomized smoothing approach with the diffusion model to offer a provable guarantee of model robustness against L2-norm bounded adversarial examples. However, they do not provide a theoretical understanding of why and how diffusion models contribute to such nontrivial certified robustness.
∗the first four authors contributed equally
Our Approach. We are the first to theoretically analyze the fundamental properties of diffusion models to understand why and how diffusion models enhance certified robustness. This deeper understanding allows us to propose a new method DensePure to improve the certified robustness of any given classifier more effectively using diffusion models. An illustration of the DensePure framework is provided in Figure 1, where it consists of a pretrained diffusion model and a pretrained classifier. DensePure incorporates two steps: (i) using the reverse process of the diffusion model to obtain a sample of the posterior data distribution conditioned on the adversarial input; and (ii) repeating the reverse process multiple times with different random seeds to approximate the label of the high-density region in the conditional distribution via a simple majority vote strategy. In particular, given an adversarial input, we repeatedly feed it into the reverse process of the diffusion model to get multiple reversed examples and feed them into the classifier to calculate their labels. We then apply the majority vote on the set of labels to get the final predicted label.
DensePure is inspired by our theoretical analysis, where we show that the reverse process of the diffusion model provides a conditional distribution of the reversed sample given an adversarial input. Sampling from this conditional distribution can enhance the certified robustness. Specifically, we prove that when the data density of clean samples is high, it is a sufficient condition for the conditional density of the reversed samples to be also high. Therefore, in DensePure, samples from the conditional distribution can recover the ground-truth labels with a high probability.
For ease of understanding and rigorous analysis, we use the highest-density point in the conditional distribution as the deterministic reversed sample for the classifier prediction. We show that the robust region for a given sample under the diffusion model's reverse process is the union of multiple convex sets, each surrounding a region around the ground-truth label. Compared with the robust region in previous work (Cohen et al., 2019), which focuses on only one region with the ground-truth label, such a union of multiple convex sets has the potential to provide a much larger robust region, resulting in higher certified robustness. Moreover, the characterization implies that the size of the robust regions is affected by the relative density and the distance between data regions with the ground-truth label and those with other labels.
We conduct extensive experiments on the ImageNet and CIFAR-10 datasets under different settings to evaluate the certified robustness of DensePure. In particular, we follow the setting from Carlini et al. (2022) and rely on randomized smoothing to certify robustness against adversarial perturbations bounded in L2-norm. We show that DensePure achieves new state-of-the-art certified robustness with a standard pretrained model, without further tuning any model parameters (e.g., smooth augmentation, Cohen et al. (2019)). On ImageNet, it achieves consistently higher certified accuracy, 7% improvement on average, than existing methods for every σ at every radius ϵ.
Technical Contributions. In this paper, we take the first step toward understanding why and how diffusion models contribute to certified robustness. We make contributions on both theoretical and empirical fronts: (1) In theory, we prove that an adversarial example can be recovered back to the original clean sample with high probability via the reverse process of a diffusion model. (2) In theory, we characterize the robust region for each point by further taking the highest-density point in the conditional distribution generated by the reverse process as the reversed sample. We show that the robust region for a given sample under the diffusion model's reverse process has the potential to be larger. To the best of our knowledge, this is the first work that characterizes the robust region of using the reverse process of the diffusion model for adversarial purification. (3) In practice, we propose DensePure based on our theoretical analysis, and we demonstrate that DensePure is consistently better than existing methods on ImageNet, with 7% improvement on average.
2 PRELIMINARIES AND BACKGROUNDS
Continuous-Time Diffusion Model. The diffusion model has two components: the diffusion process followed by the reverse process. Given an input random variable x0 ∼ p, the diffusion process adds isotropic Gaussian noises to the data so that the diffused random variable at time t is xt = √ αt(x0 + ϵt), s.t., ϵt ∼ N (0, σ2t I), and σ2t = (1 − αt)/αt, and we denote xt ∼ pt. The forward diffusion process can also be defined by the stochastic differential equation
dx = h(x, t)dt+ g(t)dw, (SDE)
where x0 ∼ p, h : Rd × R 7→ Rd is the drift coefficient, g : R 7→ R is the diffusion coefficient, and w(t) ∈ Rn is the standard Wiener process. Under mild conditions B.1, the reverse process exists and removes the added noise by solving the reverse-time SDE (Anderson, 1982)
dx̂ = [h(x̂, t)− g(t)2▽x̂ log pt(x̂)]dt+ g(t)dw, (reverse-SDE)
where dt is an infinitesimal reverse time step, and w(t) is a reverse-time standard Wiener process.
In our context, we use the conventions of VP-SDE (Song et al., 2021b), where h(x; t) := −(1/2)γ(t)x and g(t) := √γ(t) with γ(t) positive and continuous over [0, 1], such that x(t) = √α_t x(0) + √(1 − α_t) ϵ, where α_t = e^{−∫_0^t γ(s)ds} and ϵ ∼ N(0, I). We use {x_t}_{t∈[0,1]} and {x̂_t}_{t∈[0,1]} to denote the diffusion process and the reverse process generated by SDE and reverse-SDE respectively, which follow the same distribution.
Discrete-Time Diffusion Model (or DDPM (Ho et al., 2020)). DDPM constructs a discrete Markov chain {x_0, x_1, · · · , x_i, · · · , x_N} as the forward process for the training data x_0 ∼ p, such that P(x_i|x_{i−1}) = N(x_i; √(1 − β_i) x_{i−1}, β_i I), where 0 < β_1 < β_2 < · · · < β_N < 1 are predefined noise scales such that x_N approximates Gaussian white noise. Denoting α_i = ∏_{j=1}^{i}(1 − β_j), we have P(x_i|x_0) = N(x_i; √α_i x_0, (1 − α_i)I), i.e., x_i(x_0, ϵ) = √α_i x_0 + √(1 − α_i) ϵ, ϵ ∼ N(0, I).
The reverse process of DDPM learns a reverse-direction variational Markov chain p_θ(x_{i−1}|x_i) = N(x_{i−1}; µ_θ(x_i, i), Σ_θ(x_i, i)). Ho et al. (2020) define ϵ_θ as a function approximator to predict ϵ from x_i such that µ_θ(x_i, i) = (1/√(1 − β_i))(x_i − (β_i/√(1 − α_i)) ϵ_θ(x_i, i)). The reverse-time samples are then generated by x̂_{i−1} = (1/√(1 − β_i))(x̂_i − (β_i/√(1 − α_i)) ϵ_θ*(x̂_i, i)) + √β_i ϵ, ϵ ∼ N(0, I), and the optimal parameters θ* are obtained by solving θ* := argmin_θ E_{x_0,ϵ}[||ϵ − ϵ_θ(√α_i x_0 + √(1 − α_i) ϵ, i)||²₂].
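To make the reverse update above concrete, here is a minimal sketch of a single stochastic DDPM reverse step; `eps_model`, `betas`, and `alphas_cumprod` denote the learned noise predictor ϵ_θ and the noise schedules (assumed to be torch tensors available from a trained model, indexed from 0 for simplicity).

```python
import torch

def ddpm_reverse_step(x_i, i, eps_model, betas, alphas_cumprod):
    """One reverse step x_i -> x_{i-1} with the simple (fixed) variance beta_i."""
    beta_i = betas[i]
    alpha_bar_i = alphas_cumprod[i]
    eps = eps_model(x_i, i)
    mean = (x_i - beta_i / torch.sqrt(1.0 - alpha_bar_i) * eps) / torch.sqrt(1.0 - beta_i)
    noise = torch.randn_like(x_i) if i > 0 else torch.zeros_like(x_i)
    return mean + torch.sqrt(beta_i) * noise
```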
Randomized Smoothing. Randomized smoothing is used to certify the robustness of a given classifier against L2-norm-bounded perturbations. It transforms the classifier f into a smoothed version g(x) = argmax_c P_{ϵ∼N(0,σ²I)}(f(x + ϵ) = c), where g is the smoothed classifier and σ is a hyperparameter of g that controls the trade-off between robustness and accuracy. Cohen et al. (2019) show that g(x) induces certifiable robustness for x under the L2-norm with radius R, where R = (σ/2)(Φ⁻¹(p_A) − Φ⁻¹(p_B)); p_A and p_B are the probabilities of the most probable class and the "runner-up" class respectively, and Φ⁻¹ is the inverse of the standard Gaussian CDF. p_A and p_B can be estimated with arbitrarily high confidence via the Monte Carlo method (Cohen et al., 2019).
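As a small worked example of the certified radius above, the following sketch computes R from given class probabilities using the standard normal quantile from SciPy; p_A and p_B are treated as known probabilities here, whereas in practice they are confidence bounds estimated by Monte Carlo sampling.

```python
from scipy.stats import norm

def certified_radius(p_a, p_b, sigma):
    """L2 certified radius R = (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B))."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# Example: with sigma = 0.5, p_A = 0.9, p_B = 0.05, the smoothed classifier is
# certifiably robust within an L2 radius of about 0.73.
print(certified_radius(0.9, 0.05, 0.5))
```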
3 THEORETICAL ANALYSIS
In this section, we theoretically analyze why and how the diffusion model can enhance the robustness of a given classifier. We analyze SDE and reverse-SDE directly, as they generate the same stochastic processes {x_t}_{t∈[0,T]}, and existing works establish approximations of reverse-SDE (Song et al., 2021b; Ho et al., 2020).
We first show that, given a diffusion model, solving reverse-SDE generates a conditional distribution based on the scaled adversarial sample, which has high density on data regions that have high data density and are close to the adversarial sample (Theorem 3.1). See the detailed conditions in B.1. Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and sample x_{a,t} = √α_t x_a will generate a reversed random variable x̂_0 with density
$$P(\hat{x}_0 = x \mid \hat{x}_t = x_{a,t}) \propto p(x)\cdot\frac{1}{\sqrt{(2\pi\sigma_t^2)^n}}\exp\left(\frac{-\|x-x_a\|_2^2}{2\sigma_t^2}\right),$$
where p is the data distribution and σ_t² = (1 − α_t)/α_t is the variance of the Gaussian noise added at time t in the diffusion process.
Proof. (sketch) Under conditions B.1, we know {x_t}_{t∈[0,1]} and {x̂_t}_{t∈[0,1]} follow the same distribution, and the rest of the proof follows Bayes' rule.
Please see the full proofs of this and the following theorems in Appendix B.2. Remark 1. Note that P(x̂_0 = x|x̂_t = x_{a,t}) > 0 if and only if p(x) > 0; thus the generated reversed sample will lie in the data region on which we train classifiers.
In Theorem 3.1, the conditional density P(x̂0 = x | x̂t = xa,t) is high if both p(x) and the Gaussian term have high values, i.e., x has high data density and is close to the adversarial sample xa. The latter condition is reasonable since adversarial perturbations are typically bounded due to budget constraints. The above argument thus implies that a reversed sample is more likely to have the ground-truth label if the data region with the ground-truth label has high data density. For the sake of theoretical analysis and understanding, we take the point with the highest conditional density P(x̂0 = x | x̂t = xa,t) as the reversed sample, defined as P(xa; t) := argmax_x P(x̂0 = x | x̂t = xa,t). P(xa; t) is a representative of the high-density data region in the conditional distribution, and P(·; t) is a deterministic purification model. In the following, we characterize the robust region for the data region with the ground-truth label under P(·; t). The robust region and robust radius for a general deterministic purification model given a classifier are defined below.
Definition 3.2 (Robust Region and Robust Radius). Given a classifier f and a point x0, let G(x0) := {x : f(x) = f(x0)} be the data region where samples have the same label as x0. Then, given a deterministic purification model P(·; ψ) with parameter ψ, we define the robust region of G(x0) under P and f as D^f_P(G(x0); ψ) := {x : f(P(x; ψ)) = f(x0)}, i.e., the set of x such that the purified sample P(x; ψ) has the same label as x0 under f. Further, we define the robust radius of x0 as r^f_P(x0; ψ) := max{ r : x0 + ru ∈ D^f_P(G(x0); ψ), ∀‖u‖2 ≤ 1 }, i.e., the radius of the maximum inscribed ball of D^f_P(G(x0); ψ) centered at x0. We will omit P and f when they are clear from the context and write D(G(x0); ψ) and r(x0; ψ) instead.
Remark 2. In Definition 3.2, the robust region (resp. radius) is defined for each class (resp. point). When using the point with the highest P(x̂0 = x | x̂t = xa,t) as the reversed sample, ψ := t.
Now given a sample x0 with the ground-truth label, we are ready to characterize the robust region D(G(x0); ψ) under the purification model P(·; t) and classifier f. Intuitively, if the adversarial sample xa is near to x0 (in Euclidean distance), xa keeps the same label semantics as x0, and so does the purified sample P(xa; t), which implies that f(P(xa; t)) = f(x0). However, the condition that xa is near to x0 is sufficient but not necessary, since we can still achieve f(P(xa; t)) = f(x0) if xa is near to any sample x̃0 with the same label as x0. In the following, we show that the robust region D(G(x0); ψ) is the union of the convex robust sub-regions surrounding every x̃0 with the same label as x0. The following theorem characterizes the convex robust sub-region and the robust region, respectively.
Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with the ground-truth label and xa be the adversarial sample; then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)ᵀ(x′0 − x0) < σt² log(p(x0)/p(x′0)) + ‖x′0 − x0‖₂² / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. (sketch) (i) Each half-space defined by the inequality corresponds to an x′0 with f(x′0) ≠ f(x0), and every xa within it satisfies P(x̂0 = x0 | x̂t = xa,t) > P(x̂0 = x′0 | x̂t = xa,t). This implies that P(xa; t) ≠ x′0 and f(P(xa; t)) = f(x0). Convexity follows because the intersection of convex sets is convex. (ii) The "if" direction follows directly from (i). The "only if" direction holds because if xa ∉ D(G(x0); t), then there exists x̃1 such that f(x̃1) ≠ f(x0) and P(x̂0 = x̃1 | x̂t = xa,t) > P(x̂0 = x̃0 | x̂t = xa,t) for all x̃0 with f(x̃0) = f(x0), and thus f(P(xa; t)) ≠ f(x0).
Remark 3. Theorem 3.3 implies that when the data region G(x0) has higher data density and larger distances to data regions with other labels, it tends to have a larger robust region, and points in the data region tend to have larger robust radii. Since adversarial attacks typically have small magnitude, with a large robust region the adversarial sample can be recovered to the clean sample with high probability.
In the literature, people focus more on the robust radius (lower bound) r(x0; t) (Cohen et al., 2019; Carlini et al., 2022), which can be obtained by finding the maximum inscribed ball inside D(G(x0); t) centered at x0. Note that although Dsub(x0; t) is convex, D(G(x0); t) is generally not. Therefore, finding r(x0; t) is a non-convex optimization problem. In particular, it can be formulated as a disjunctive optimization problem with integer indicator variables, which is typically NP-hard to solve. One alternative is finding the maximum inscribed ball in Dsub(x0; t), which can be formulated as a convex optimization problem whose optimal value provides a lower bound for r(x0; t). However, D(G(x0); t) has the potential to provide a much larger robust radius because it might connect different convex robust sub-regions into one, as shown in Figure 2.
Figure 2: Illustration of the robust region D(G(x0); t) = ⋃_{i=1}^{3} Dsub(xi; t), where x0, x1, x2 are samples with the ground-truth label and x3 is a sample with another label. xa = x0 + ϵa is an adversarial sample such that P(xa; t) = x1 ≠ x0, so the classification is correct even though xa is not reversed back to x0. rsub(x0) < r(x0) illustrates our claim that taking the union leads to a larger robust radius.
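As an illustration of the geometry in Theorem 3.3 and Figure 2 (our own sketch, not part of the paper), the radius of the largest ball centered at x0 inside the convex sub-region Dsub(x0; t) equals the minimum distance from x0 to its bounding hyperplanes; the candidate wrong-label points and their log-densities are assumed to be given, e.g., from a density model.

```python
# Sketch: lower bound on the robust radius via the convex sub-region D_sub(x0; t).
# Each wrong-label candidate x0' contributes the half-space
#   (xa - x0)^T (x0' - x0) < sigma_t^2 * log(p(x0)/p(x0')) + ||x0' - x0||^2 / 2,
# so the largest ball centered at x0 inside the intersection has radius equal to
# the minimum distance from x0 to the bounding hyperplanes.
import numpy as np

def sub_region_radius(x0, logp_x0, wrong_candidates, logp_wrong, sigma_t2):
    radii = []
    for x0p, logp in zip(wrong_candidates, logp_wrong):
        d = x0p - x0
        c = sigma_t2 * (logp_x0 - logp) + 0.5 * np.dot(d, d)  # half-space offset
        radii.append(c / np.linalg.norm(d))  # distance from x0 to the hyperplane
    return min(radii)  # radius of the maximum inscribed ball in D_sub(x0; t)
```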
In practice, we cannot guarantee to establish an exact reverse process like reverse-SDE; instead we try to establish an approximate reverse process that mimics the exact one. As long as the approximate reverse process is close enough to the exact reverse process, they will generate close conditional distributions based on the adversarial sample. Then the density and locations of the data regions in the two conditional distributions will not differ much, and neither will the robust region for each data region. We take the score-based diffusion model in Song et al. (2021b) as an example and present Theorem 3.4, which bounds the KL-divergence between the conditional distributions generated by reverse-SDE and the score-based diffusion model. Ho et al. (2020) showed that using variational inference to fit DDPM is equivalent to optimizing an objective resembling that of the score-based diffusion model with a specific weighting scheme, so the results can be extended to DDPM.
Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we have DKL(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)), where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively, JSM(θ, t; λ(·)) := (1/2) ∫₀ᵗ E_{pτ(x)}[λ(τ) ‖∇x log pτ(x) − sθ(x, τ)‖₂²] dτ, sθ(x, τ) is the score function approximating ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. (sketch) Let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on xa,t. Under conditions B.1, µt and νt are uniquely defined and the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013).
Remark 4. Theorem 3.4 shows that if the training loss is smaller, the conditional distributions generated by reverse-SDE and the score-based diffusion model are closer, and they coincide if the training loss is zero. Furthermore, by Pinsker's inequality, the total variation (a distance metric) is upper bounded: DTV(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) ≤ √((1/2) JSM(θ, t; λ(·))).
4 DENSEPURE
Inspired by the theoretical analysis, we introduce DensePure and show how to calculate its certified robustness radius via the randomized smoothing algorithm.
Framework. Our framework, DensePure, consists of two components: (1) an off-the-shelf diffusion model with reverse process rev and (2) an off-the-shelf base classifier f .
The pipeline of DensePure is shown in Figure 1. Given an input x, we feed it into the reverse process rev of the diffusion model to get the reversed sample rev(x), and then repeat the above process K times to get K reversed samples {rev(x)1, · · · , rev(x)K}. We feed these K reversed samples into the classifier to get the corresponding predictions {f(rev(x)1), · · · , f(rev(x)K)} and then apply the majority vote, termed MV, on these predictions to get the final predicted label ŷ = MV({f(rev(x)1), · · · , f(rev(x)K)}) = argmax_c Σ_{i=1}^{K} 1{f(rev(x)i) = c}.
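A minimal sketch of this majority-vote pipeline is given below (our own illustration; `reverse_process` and `f` are hypothetical stand-ins for the diffusion model's reverse process rev and the off-the-shelf classifier).

```python
# Minimal sketch of the DensePure prediction pipeline described above.
from collections import Counter

def densepure_predict(x, reverse_process, f, K=40):
    labels = []
    for _ in range(K):                # K independent runs of the reverse process
        x_rev = reverse_process(x)    # reversed sample rev(x)_i
        labels.append(f(x_rev))       # classify each reversed sample
    return Counter(labels).most_common(1)[0][0]   # majority vote MV(...)
```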
Certified Robustness of DensePure with Randomized Smoothing.
In this paragraph, we will illustrate the algorithm to calculate certified robustness of DensePure via RS, which offers robustness guarantees for a model under a L2-norm ball.
In particular, we follow the similar setting of Carlini et al. (2022) which uses a DDPM-based diffusion model. The overall algorithm contains three steps:
(1) Our framework estimates n, the number of steps used for the reverse process of the DDPM-based diffusion model. Since randomized smoothing (Cohen et al., 2019) adds Gaussian noise ϵ ∼ N(0, σ²I) to the data input x to get the randomized input xrs = x + ϵ, we map between the noise required by the randomized example xrs and the noise required by the diffused data xn (i.e., xn ∼ N(xn; √αn x0, (1 − αn)I)) after n diffusion steps, so that αn = 1/(1 + σ²). In this way, we can compute the corresponding timestep n = argmin_s {|αs − 1/(1 + σ²)| : s ∈ [N]}.
(2) Given the above calculated timestep n, we scale xrs by √αn to obtain the scaled randomized smoothing sample √αn · xrs. Then we feed √αn · xrs into the reverse process of the diffusion model K times to get the reversed sample set {x̂1_0, x̂2_0, · · · , x̂i_0, · · · , x̂K_0}.
(3) We feed the obtained reversed sample set into a standard off-the-shelf classifier f to get the corresponding predicted labels {f(x̂1_0), f(x̂2_0), . . . , f(x̂i_0), . . . , f(x̂K_0)}, and apply majority vote, denoted MV(· · ·), on these predicted labels to get the final label for xrs.
Fast Sampling. To calculate the reversed sample, the standard reverse process of DDPM-based models requires repeatedly applying a "single-step" operation n times to get the reversed sample x̂0 (i.e., x̂0 = Reverse(· · ·Reverse(· · ·Reverse(Reverse(√αn xrs; n); n − 1); · · · ; i); · · · ; 1)). Here x̂_{i−1} = Reverse(x̂_i; i) is equivalent to sampling x̂_{i−1} from N(x̂_{i−1}; µθ(x̂_i, i), Σθ(x̂_i, i)), where µθ(x̂_i, i) = (1/√(1 − βi)) · (x̂_i − (βi/√(1 − αi)) ϵθ(x̂_i, i)) and Σθ := exp(v log βi + (1 − v) log β̃i). Here v is a parameter learned by DDPM and β̃i = ((1 − α_{i−1})/(1 − αi)) · βi.
To reduce the time complexity, we use the uniform sub-sampling strategy from Nichol & Dhariwal (2021). We uniformly sample a subsequence of size b from the original N-step reverse process. Note that Carlini et al. (2022) set b = 1 for "one-shot" sampling; in this case, x̂0 = (1/√αn) · (xn − √(1 − αn) ϵθ(√αn xrs, n)) is a deterministic value, so the reverse process does not yield a posterior data distribution conditioned on the input. Instead, we can tune the number of sub-sampled DDPM steps to be larger than one (b > 1) to sample from a posterior data distribution conditioned on the input. The details about fast sampling are shown in Appendix C.2.
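The following sketch (our own illustration, not the authors' implementation) summarizes these certification-time steps: mapping the randomized-smoothing noise level σ to a timestep n via αn ≈ 1/(1 + σ²), scaling the input, and running a b-step sub-sampled reverse process. `alpha_bar` (the cumulative α schedule) and `reverse_step` (one "Reverse" operation) are assumed interfaces, and the exact sub-sampling schedule used in the paper is given in Appendix C.2.

```python
# Sketch: from a randomized-smoothing sample x_rs to one reversed sample x_hat_0.
import numpy as np

def reverse_from_rs_sample(x_rs, sigma, alpha_bar, reverse_step, b=10):
    # Pick the diffusion timestep n whose alpha_bar[n] is closest to 1/(1+sigma^2).
    target = 1.0 / (1.0 + sigma ** 2)
    n = int(np.argmin(np.abs(alpha_bar - target)))
    x = np.sqrt(alpha_bar[n]) * x_rs          # scaled randomized-smoothing sample
    # Uniformly sub-sample b timesteps from {n, ..., 1} and run b reverse steps.
    schedule = np.unique(np.linspace(n, 1, b).astype(int))[::-1]
    for i in schedule:
        x = reverse_step(x, i)                # one step of the DDPM reverse process
    return x                                  # one reversed sample x_hat_0
```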
5 EXPERIMENTS
In this section, we use DensePure to evaluate certified robustness on two standard datasets, CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009).
Experimental settings We follow the experimental setting of Carlini et al. (2022). Specifically, for CIFAR-10, we use the 50-M unconditional improved diffusion model from Nichol & Dhariwal (2021) as the diffusion model. We select the ViT-B/16 model (Dosovitskiy et al., 2020) pretrained on ImageNet-21k and finetuned on CIFAR-10 as the classifier, which achieves 97.9% accuracy on CIFAR-10. For ImageNet, we use the unconditional 256×256 guided diffusion model from Dhariwal & Nichol (2021) as the diffusion model and the pretrained BEiT-large model (Bao et al., 2021) trained on ImageNet-21k as the classifier, which achieves 88.6% top-1 accuracy on the validation set of ImageNet-1k. We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure, we set K = 40 and b = 10 except in the ablation study. Details about the baselines are in the appendix.
5.1 MAIN RESULTS
We perform DensePure on subsets of CIFAR-10 and ImageNet. We choose the same subsets as in Cohen et al. (2019): 500 samples for CIFAR-10 and 100 samples for ImageNet (results with 500 samples are shown in Appendix D.10). The results are shown in Table 1. For CIFAR-10, compared with the models that are carefully trained with randomized smoothing techniques in an end-to-end manner (i.e., without an off-the-shelf classifier), we observe that our method with a standard off-the-shelf classifier outperforms them at smaller ϵ = {0.25, 0.5} on both CIFAR-10 and ImageNet, while achieving comparable performance at larger ϵ = {0.75, 1.0}. Compared with the non-diffusion-based methods that use an off-the-shelf classifier (i.e., Denoised (Salman et al., 2020) and Lee (Lee, 2021)), both our method and Carlini et al. (2022) are significantly better than them. These results verify the non-trivial adversarial robustness improvements introduced by the diffusion model. For ImageNet, our method is consistently better than all prior methods by a large margin.
Since both Carlini et al. (2022) and DensePure use the diffusion model, to better understand the importance of our design, which approximates the label of the high-density region in the conditional distribution, we compare DensePure with Carlini et al. (2022) in a more fine-grained manner.
We show the detailed certified robustness of the model among different σ at different radii for CIFAR-10 in Figure 3-left and for ImageNet in Figure 3-right. We also present our results of certified accuracy at different ϵ in Appendix D.3. From these results, we find that our method is consistently better at most ϵ (except ϵ = 0) among different σ. The performance margin between ours and Carlini et al. (2022) becomes even larger at large ϵ. These results further indicate that although the diffusion model improves model robustness, leveraging the posterior data distribution conditioned on the input instance via the reverse process (as DensePure does), instead of using a single sample (Carlini et al., 2022), is the key to better robustness. Additionally, we use off-the-shelf classifiers with ViT-based architectures trained on a larger dataset. In the later ablation study, we select the CNN-based architecture Wide-ResNet trained on the standard dataset from scratch; our method still achieves non-trivial robustness. Further, our experiments in Appendix D.7 show that removing the diffusion model from DensePure deteriorates the performance, which further verifies that our design is non-trivial.
5.2 ABLATION STUDY
Voting samples (K) We first show how K affects the certified accuracy. For efficiency, we select b = 10. We conduct experiments on both datasets. We show the certified accuracy among different radii r at σ = 0.25 in Figure 4. The results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.4. Compared with the baseline (Carlini et al., 2022), we find that a larger majority vote number leads to better certified accuracy. This verifies that DensePure indeed improves adversarial robustness and that making a good approximation of the label of the high-density region requires a large number of voting samples. We find that the certified accuracy almost converges at K = 40; thus, we set K = 40 for our experiments. The results with other σ show a similar tendency. To further improve time efficiency, we can use K-Consensus (Horváth et al., 2021). It accelerates the majority vote process by 45% ∼ 60% with a negligible performance drop. The experimental details and results are in Appendix D.8.
Fast sampling steps (b) To investigate the role of b, we conduct additional experiments with b ∈ {2, 5} at σ = 0.25. The results on ImageNet are shown in Figure 4, and the results for σ = 0.5, 1.0 and CIFAR-10 are shown in Appendix D.5. Looking at the results with majority vote, we find that a larger b leads to better certified accuracy, since a larger b generates images of higher quality. Looking at the results without majority vote, the conclusion is the opposite: a larger b leads to lower certified accuracy, which contradicts our intuition. We conjecture that, although more sampling steps normally lead to better image recovery quality, they also introduce more randomness, increasing the probability that the reversed image lands in a data region with the wrong label. These results further verify that majority vote is necessary for better performance.
Different architectures One advantage of DensePure is the use of off-the-shelf classifiers, so it can plug in any classifier. We choose convolutional neural network (CNN)-based architectures, Wide-ResNet28-10 (Zagoruyko & Komodakis, 2016) for CIFAR-10 with 95.1% accuracy and Wide-ResNet50-2 for ImageNet with 81.5% top-1 accuracy, at σ = 0.25. The results are shown in Table 2 and Figure E in Appendix D.6. Results for more model architectures and more σ on ImageNet are also shown in Appendix D.6. We show that our method can enhance the certified robustness of any given classifier trained on the original data distribution. Notably, although the performance of the CNN-based classifier is lower than that of the Transformer-based classifier, DensePure with the CNN-based model as the classifier can outperform Carlini et al. (2022) with the ViT-based model as the classifier (except at ϵ = 0 for CIFAR-10).
6 RELATED WORK
Using an off-the-shelf generative model to purify adversarial perturbations has become an important direction in adversarial defense. Previous works have developed various purification methods based on different generative models, such as GANs (Samangouei et al., 2018), autoregressive generative models (Song et al., 2018), and energy-based models (Du & Mordatch, 2019; Grathwohl et al., 2020; Hill et al., 2021). More recently, as diffusion models (or score-based models) achieve better generation quality than other generative models (Ho et al., 2020; Dhariwal & Nichol, 2021), many works consider using diffusion models for adversarial purification (Nie et al., 2022; Wu et al., 2022; Sun et al., 2022). Although they have shown good empirical results in defending against existing adversarial attacks (Nie et al., 2022), there is no provable guarantee on the robustness of such methods. On the other hand, certified defenses provide guarantees of robustness (Mirman et al., 2018; Cohen et al., 2019; Lecuyer et al., 2019; Salman et al., 2020; Horváth et al., 2021; Zhang et al., 2018; Raghunathan et al., 2018a;b; Salman et al., 2019b; Wang et al., 2021). They provide a lower bound on model accuracy under constrained perturbations. Among them, approaches based on randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Salman et al., 2019a; Jeong & Shin, 2020; Zhai et al., 2020; Horváth et al., 2021; Jeong et al., 2021; Salman et al., 2020; Lee, 2021; Carlini et al., 2022) show great scalability and achieve promising performance on large networks and datasets. The most similar work to ours is Carlini et al. (2022), which uses diffusion models combined with standard classifiers for certified defense. They view the diffusion model as a blackbox, without a theoretical understanding of why and how the diffusion model contributes to such nontrivial certified robustness.
7 CONCLUSION
In this work, we theoretically prove that the diffusion model can purify adversarial examples back to the corresponding clean samples with high probability, as long as the data density of the corresponding clean samples is high enough. Our theoretical analysis characterizes the conditional distribution of the reversed samples given the adversarial input, generated by the diffusion model reverse process. Using the highest density point in the conditional distribution as the deterministic reversed sample, we identify the robust region of a given instance under the diffusion model reverse process, which is potentially much larger than that of previous methods. Our analysis inspires us to propose an effective pipeline, DensePure, for adversarial robustness. We conduct comprehensive experiments to show the effectiveness of DensePure by evaluating the certified robustness via the randomized smoothing algorithm. Note that DensePure is an off-the-shelf pipeline that does not require training a smooth classifier. Our results show that DensePure achieves new state-of-the-art certified robustness against L2-norm perturbations. We hope that our work sheds light on an in-depth understanding of the diffusion model for adversarial robustness.
Limitations. The time complexity of DensePure is high since it requires repeating the reverse process multiple times. In this paper, we use fast sampling to reduce the time complexity and show that the setting (b = 2 and K = 10) can achieve nontrivial certified accuracy. We leave more advanced fast sampling strategies as future work.
ETHICS STATEMENT
Our work can positively impact the society by improving the robustness and security of AI systems. We have not involved human subjects or data set releases; instead, we carefully follow the provided licenses of existing data and models for developing and evaluating our method.
8 ACKNOWLEDGMENT
We thank the support of NSF grant No.1910100, NSF CNS 2046726, C3 AI and DHS under grant No. 17STQAC00001-06-00, DARPA under grant N66001-15-C-4066, the Center for Long-Term Cybersecurity, and Berkeley Deep Drive. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors, and do not necessarily reflect the views of the sponsors.
REPRODUCIBILITY STATEMENT
For theoretical analysis, all necessary assumptions are listed in B.1 and the complete proofs are included in B.2. The experimental setting and datasets are provided in section 5. The pseudo-code for DensePure is in C.1 and the fast sampling procedures are provided in C.2.
APPENDIX
A NOTATIONS
p — data distribution
P(A) — probability of event A
C^k — set of functions with continuous k-th derivatives
w(t) — standard Wiener process
w̄(t) — reverse-time standard Wiener process
h(x, t) — drift coefficient in SDE
g(t) — diffusion coefficient in SDE
αt — scaling coefficient at time t
σt² — variance of added Gaussian noise at time t
{xt}t∈[0,1] — diffusion process generated by SDE
{x̂t}t∈[0,1] — reverse process generated by reverse-SDE
pt — distribution of xt and x̂t
{x1, x2, . . . , xN} — diffusion process generated by DDPM
{βi}_{i=1}^{N} — pre-defined noise scales in DDPM
ϵa — adversarial attack
xa — adversarial sample
xa,t — scaled adversarial sample
f(·) — classifier
g(·) — smoothed classifier
P(x̂0 = x | x̂t = xa,t) — density of the conditional distribution generated by reverse-SDE based on xa,t
P(xa; t) — purification model returning the highest density point
G(x0) — data region with the same label as x0
D^f_P(G(x0); t) — robust region for G(x0) associated with base classifier f and purification model P
r^f_P(x0; t) — robust radius of x0 associated with base classifier f and purification model P
Dsub(x0; t) — convex robust sub-region
sθ(x, t) — score function
{xθt}t∈[0,1] — reverse process generated by the score-based diffusion model
P(xθ0 = x | xθt = xa,t) — density of the conditional distribution generated by the score-based diffusion model based on xa,t
λ(τ) — weighting scheme of the training loss for the score-based diffusion model
JSM(θ, t; λ(·)) — truncated training loss for the score-based diffusion model
µt, νt — path measures for {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively
B MORE DETAILS ABOUT THEORETICAL ANALYSIS
B.1 ASSUMPTIONS
(i) The data distribution p ∈ C2 and Ex∼p[||x||22] <∞.
(ii) ∀t ∈ [0, T ] : h(·, t) ∈ C1,∃C > 0,∀x ∈ Rn, t ∈ [0, T ] : ||h(x, t)||2 ⩽ C (1 + ||x||2). (iii) ∃C > 0,∀x,y ∈ Rn : ||h(x, t)− h(y, t)||2 ⩽ C∥x− y∥2. (iv) g ∈ C and ∀t ∈ [0, T ], |g(t)| > 0. (v) ∀t ∈ [0, T ] : sθ(·, t) ∈ C1,∃C > 0,∀x ∈ Rn, t ∈ [0, T ] : ||sθ(x, t)||2 ⩽ C (1 + ||x||2).
(vi) ∃C > 0,∀x,y ∈ Rn : ||sθ(x, t)− sθ(y, t)||2 ⩽ C∥x− y∥2.
B.2 THEOREMS AND PROOFS
Theorem 3.1. Under conditions B.1, solving equation reverse-SDE starting from time t and point xa,t = √αt · xa will generate a reversed random variable x̂0 with conditional distribution
P(x̂0 = x | x̂t = xa,t) ∝ p(x) · (1/√((2πσt²)^n)) · exp(−‖x − xa‖₂² / (2σt²)),
where σt² = (1 − αt)/αt is the variance of the Gaussian noise added at timestep t in the diffusion process SDE.
Proof. Under the assumption, we know {xt}t∈[0,1] and {x̂t}t∈[0,1] follow the same distribution, which means
P(x̂0 = x | x̂t = xa,t) = P(x̂0 = x, x̂t = xa,t) / P(x̂t = xa,t)
= P(x0 = x, xt = xa,t) / P(xt = xa,t)
= P(x0 = x) · P(xt = xa,t | x0 = x) / P(xt = xa,t)
∝ P(x0 = x) · (1/√((2πσt²)^n)) · exp(−‖x − xa‖₂² / (2σt²))
= p(x) · (1/√((2πσt²)^n)) · exp(−‖x − xa‖₂² / (2σt²)),
where the third equation is due to the chain rule of probability and the last equation is a result of the diffusion process.
Theorem 3.3. Under conditions B.1 and classifier f, let x0 be the sample with the ground-truth label and xa be the adversarial sample; then (i) the purified sample P(xa; t) will have the ground-truth label if xa falls into the following convex set,
Dsub(x0; t) := ⋂_{x′0 : f(x′0) ≠ f(x0)} { xa : (xa − x0)ᵀ(x′0 − x0) < σt² log(p(x0)/p(x′0)) + ‖x′0 − x0‖₂² / 2 },
and further, (ii) the purified sample P(xa; t) will have the ground-truth label if and only if xa falls into the following set, D(G(x0); t) := ⋃_{x̃0 : f(x̃0) = f(x0)} Dsub(x̃0; t). In other words, D(G(x0); t) is the robust region for data region G(x0) under P(·; t) and f.
Proof. We start with part (i).
The main idea is to prove that a point x′0 such that f(x ′ 0) ̸= f(x0) should have lower density than x0 in the conditional distribution in Theorem 3.1 so that P(xa; t) cannot be x′0. In other words, we should have
P (x̂0 = x0|x̂t = xa,t) > P (x̂0 = x′0 | x̂t = xa,t) .
By Theorem 3.1, this is equivalent to
p(x0) · (1/√((2πσt²)^n)) · exp(−‖x0 − xa‖₂² / (2σt²)) > p(x′0) · (1/√((2πσt²)^n)) · exp(−‖x′0 − xa‖₂² / (2σt²))
⇔ log(p(x0)/p(x′0)) > (1/(2σt²)) · (‖x0 − xa‖₂² − ‖x′0 − xa‖₂²)
⇔ log(p(x0)/p(x′0)) > (1/(2σt²)) · (‖x0 − xa‖₂² − ‖x′0 − x0 + x0 − xa‖₂²)
⇔ log(p(x0)/p(x′0)) > (1/(2σt²)) · (2(xa − x0)ᵀ(x′0 − x0) − ‖x′0 − x0‖₂²).
Re-organizing the above inequality, we obtain
(xa − x0)ᵀ(x′0 − x0) < σt² log(p(x0)/p(x′0)) + (1/2) ‖x′0 − x0‖₂².
Note that the order of xa is at most one in every term of the above inequality, so the inequality actually defines a half-space in Rn for every (x0,x′0) pair. Further, we have to satisfy the inequality for every x′0 such that f(x ′ 0) ̸= f(x0), therefore, by intersecting over all such half-spaces, we obtain a convex Dsub (x0; t). Then we prove part (ii).
On the one hand, if xa ∈ D (G(x0); t), then there exists one x̃0 such that f(x̃0) = f(x0) and xa ∈ Dsub (x̃0; t). By part (i), x̃0 has higher probability than all other points with different labels from x0 in the conditional distribution P (x̂0 = x|x̂t = xa,t) characterized by Theorem 3.1. Therefore, P(xa; t) should have the same label as x0. On the other hand, if xa /∈ D (G(x0); t), then there is a point x̃1 with different label from x0 such that for any x̃0 with the same label as x0, P (x̂0 = x̃1|x̂t = xa,t) > P (x̂0 = x̃0|x̂t = xa,t). In other words, P(xa; t) would have different label from x0.
Theorem 3.4. Under the score-based diffusion model (Song et al., 2021b) and conditions B.1, we can bound
DKL(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) = JSM(θ, t; λ(·)),
where {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] are stochastic processes generated by reverse-SDE and the score-based diffusion model respectively,
JSM(θ, t; λ(·)) := (1/2) ∫₀ᵗ E_{pτ(x)}[ λ(τ) ‖∇x log pτ(x) − sθ(x, τ)‖₂² ] dτ,
sθ(x, τ) is the score function approximating ∇x log pτ(x), and λ : R → R is any weighting scheme used in training score-based diffusion models.
Proof. Similar to the proof of (Song et al., 2021a, Theorem 1), let µt and νt be the path measures for the reverse processes {x̂τ}τ∈[0,t] and {xθτ}τ∈[0,t] respectively, based on the scaled adversarial sample xa,t. Under conditions B.1, the KL-divergence can be computed via the Girsanov theorem (Oksendal, 2013):
DKL(P(x̂0 = x | x̂t = xa,t) ‖ P(xθ0 = x | xθt = xa,t)) = −Eµt [ log (dνt/dµt) ]
(i)= Eµt [ ∫₀ᵗ g(τ)(∇x log pτ(x) − sθ(x, τ)) dwτ + (1/2) ∫₀ᵗ g(τ)² ‖∇x log pτ(x) − sθ(x, τ)‖₂² dτ ]
(ii)= Eµt [ (1/2) ∫₀ᵗ g(τ)² ‖∇x log pτ(x) − sθ(x, τ)‖₂² dτ ]
= (1/2) ∫₀ᵗ E_{pτ(x)}[ g(τ)² ‖∇x log pτ(x) − sθ(x, τ)‖₂² ] dτ
= JSM(θ, t; g(·)²),
where (i) is due to the Girsanov theorem and (ii) is due to the martingale property of Itô integrals.
C MORE DETAILS ABOUT DENSEPURE
C.1 PSEUDO-CODE
We provide the pseudo code of DensePure in Algo. 1 and Alg. 2
Algorithm 1 DensePure pseudo-code with the highest density point
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose ψ = t
2: Input sample xa = x0 + ϵa
3: Compute x̂0 = P(xa; ψ)
4: ŷ = f(x̂0)

Algorithm 2 DensePure pseudo-code with majority vote
1: Initialization: choose an off-the-shelf diffusion model and classifier f; choose σ
2: Compute αn = 1/(1 + σ²), n = argmin_s {|αs − 1/(1 + σ²)| : s ∈ {1, 2, · · · , N}}
3: Generate input sample xrs = x0 + ϵ, ϵ ∼ N(0, σ²I)
4: Choose schedule S^b, get x̂i_0 ← rev(√αn xrs)_i, i = 1, 2, . . . , K with Fast Sampling
5: ŷ = MV({f(x̂1_0), . . . , f(x̂K_0)}) = argmax_c Σ_{i=1}^{K} 1{f(x̂i_0) = c}
C.2 DETAILS ABOUT FAST SAMPLING
Applying the single-step operation n times is a time-consuming process. To reduce the time complexity, we follow the method used in (Nichol & Dhariwal, 2021) and sample a subsequence S^b with b values from the original schedule S = {n, n − 1, · · · , 1} (n values, where S_j denotes the j-th element of S). Specifically, S^b = {n, ⌊n − n/b⌋, · · · , 1} (b values), where S^b_j denotes the j-th element of S^b, with S^b_j = ⌊n − jn/b⌋ for j < b and S^b_b = 1.

Certified Accuracy at ϵ (%)
Methods                          Noise       0.0          0.25         0.5          0.75         1.0
Carlini (Carlini et al., 2022)   σ = 0.25    88.0         73.8         56.2         41.6         0.0
Carlini (Carlini et al., 2022)   σ = 0.5     74.2         62.0         50.4         40.2         31.0
Carlini (Carlini et al., 2022)   σ = 1.0     49.4         41.4         34.2         27.8         21.8
Ours                             σ = 0.25    87.6(-0.4)   76.6(+2.8)   64.6(+8.4)   50.4(+8.8)   0.0(+0.0)
Ours                             σ = 0.5     73.6(-0.6)   65.4(+3.4)   55.6(+5.2)   46.0(+5.8)   37.4(+6.4)
Ours                             σ = 1.0     55.0(+5.6)   47.8(+6.4)   40.8(+6.6)   33.0(+5.2)   28.2(+6.4)
Table A: Certified accuracy compared with Carlini et al. (2022) for CIFAR-10 at all σ. The numbers in brackets are the difference in certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).

Within this context, we adapt the original α schedule αS = {α1, · · · , αi, · · · , αn} used for the single-step operation to the new schedule αS^b = {α_{S^b_1}, · · · , α_{S^b_j}, · · · , α_{S^b_b}} (i.e., the i-th element of αS^b is α^{S^b}_i = α_{S^b_i} = α_{⌊n − in/b⌋}). We calculate the corresponding βS^b = {β^{S^b}_1, β^{S^b}_2, · · · , β^{S^b}_b} and β̃S^b = {β̃^{S^b}_1, β̃^{S^b}_2, · · · , β̃^{S^b}_b} schedules, where β^{S^b}_i = 1 − α^{S^b}_i / α^{S^b}_{i−1} and β̃^{S^b}_i = ((1 − α^{S^b}_{i−1}) / (1 − α^{S^b}_i)) · β^{S^b}_i. With these new schedules, we can use b reverse steps to calculate x̂0 = Reverse(· · ·Reverse(Reverse(xn; S^b_b); S^b_{b−1}); · · · ; 1). Since Σθ(x_{S^b_i}, S^b_i) is parameterized as a range between βS^b and β̃S^b, it will automatically be rescaled. Thus, x̂_{S^b_{i−1}} = Reverse(x̂_{S^b_i}; S^b_i) is equivalent to sampling x_{S^b_{i−1}} from N(x_{S^b_{i−1}}; µθ(x_{S^b_i}, S^b_i), Σθ(x_{S^b_i}, S^b_i)).
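A small sketch of this schedule adaptation is given below (our own illustration; the indexing convention for `alpha_bar` and the handling of the first kept step are assumptions and may differ from the authors' implementation).

```python
# Sketch: build the sub-sampled schedule S^b and the corresponding beta / beta_tilde
# values used by the b-step reverse process, given the cumulative schedule alpha_bar
# (assumed 0-indexed over timesteps 1..n, decreasing in the timestep).
import numpy as np

def subsampled_schedules(alpha_bar, n, b):
    # Kept timesteps of S^b = {n, floor(n - n/b), ..., 1}, sorted ascending here.
    steps = sorted({max(1, int(np.floor(n - j * n / b))) for j in range(1, b)} | {n})
    a = np.array([alpha_bar[s - 1] for s in steps])       # cumulative alpha at kept steps
    a_prev = np.concatenate(([1.0], a[:-1]))               # alpha at the previous kept step
    beta = 1.0 - a / a_prev                                 # beta_i^{S^b}
    beta_tilde = (1.0 - a_prev) / (1.0 - a) * beta          # beta_tilde_i^{S^b}
    return steps, beta, beta_tilde
```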
D MORE EXPERIMENTAL DETAILS AND RESULTS
D.1 IMPLEMENTATION DETAILS
We select three different noise levels σ ∈ {0.25, 0.5, 1.0} for certification. For the parameters of DensePure , The sampling numbers when computing the certified radius are n = 100, 000 for CIFAR-10 and n = 10, 000 for ImageNet. We evaluate the certified robustness on 500 samples subset of CIFAR-10 testset and 100 samples subset of ImageNet validation set. we set K = 40 and b = 10 except the results in ablation study.
D.2 BASELINES.
We select randomized smoothing based methods including PixelDP (Lecuyer et al., 2019), RS (Cohen et al., 2019), SmoothAdv (Salman et al., 2019a), Consistency (Jeong & Shin, 2020), MACER (Zhai et al., 2020), Boosting (Horváth et al., 2021), SmoothMix (Jeong et al., 2021), Denoised (Salman et al., 2020), Lee (Lee, 2021), and Carlini (Carlini et al., 2022) as our baselines. Among them, PixelDP, RS, SmoothAdv, Consistency, MACER, and SmoothMix require training a smooth classifier for better certification performance, while the others do not. Salman et al. and Lee use an off-the-shelf classifier but without using the diffusion model. The most similar one to ours is Carlini et al., which also uses both an off-the-shelf diffusion model and classifier. The above two settings mainly follow Carlini et al. (2022), which makes it easier to compare with their results.
D.3 MAIN RESULTS FOR CERTIFIED ACCURACY
We compare with Carlini et al. (2022) in a more fine-grained manner. We provide results of certified accuracy at different ϵ in Table A for CIFAR-10 and Table B for ImageNet. We include the accuracy difference between ours and Carlini et al. (2022) in brackets in the tables. We observe from the tables that the certified accuracy of our method outperforms Carlini et al. (2022) except at ϵ = 0 with σ = 0.25, 0.5 for CIFAR-10.
D.4 EXPERIMENTS FOR VOTING SAMPLES
Here we provide more experiments with σ ∈ {0.5, 1.0} and b = 10 for different numbers of voting samples K in Figure A and Figure B. The results for CIFAR-10 are in Figure G. We can draw the same conclusion as in the main text.
Certified Accuracy at ϵ (%)
Methods                          Noise       0.0          0.5          1.0          1.5           2.0          3.0
Carlini (Carlini et al., 2022)   σ = 0.25    77.0         71.0         0.0          0.0           0.0          0.0
Carlini (Carlini et al., 2022)   σ = 0.5     74.0         67.0         54.0         46.0          0.0          0.0
Carlini (Carlini et al., 2022)   σ = 1.0     59.0         53.0         49.0         38.0          29.0         22.0
Ours                             σ = 0.25    80.0(+3.0)   76.0(+5.0)   0.0(+0.0)    0.0(+0.0)     0.0(+0.0)    0.0(+0.0)
Ours                             σ = 0.5     75.0(+1.0)   72.0(+5.0)   62.0(+8.0)   49.0(+3.0)    0.0(+0.0)    0.0(+0.0)
Ours                             σ = 1.0     61.0(+2.0)   57.0(+4.0)   53.0(+4.0)   49.0(+11.0)   37.0(+8.0)   26.0(+4.0)
Table B: Certified accuracy compared with Carlini et al. (2022) for ImageNet at all σ. The numbers in brackets are the difference in certified accuracy between the two methods. Our diffusion model and classifier are the same as Carlini et al. (2022).
Figure A: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 0.50.
D.5 EXPERIMENTS FOR FAST SAMPLING STEPS
We also implement additional experiments with b ∈ {1, 2, 10} at σ = 0.5, 1.0. The results are shown in Figure C and Figure D. The results for CIFAR-10 are in Figure G. We draw the same conclusion as in the main text.
D.6 EXPERIMENTS FOR DIFFERENT ARCHITECTURES
We try different model architectures on ImageNet, including Wide ResNet-50-2 and ResNet-152, with b = 2 and K = 10. The results are shown in Figure F. We find that our method outperforms Carlini et al. (2022) for all σ among different classifiers.
D.7 EXPERIMENTS FOR RANDOMIZED SMOOTHING WITHOUT DIFFUSION MODEL
To show the effectiveness of our diffusion model design, we remove the diffusion model from our pipeline and conduct experiments. Specifically, first, we remove the diffusion model and perform randomized smoothing only on the pretrained classifier used in DensePure (i.e., ViT-B/16 for CIFAR-10 and BEiT for ImageNet). The results are shown in Table C and Table D. The number in the bracket is the robust accuracy of the pretrained classifier minus the robust accuracy of DensePure. From the results, we conclude that without the help of diffusion models, neither ViT nor BEiT can reach high certified accuracy.
Second, we conduct additional experiments to fairly compare with randomized smoothing without diffusion models under the majority vote setting. Specifically, we activate droppath in BEiT at the inference stage to support majority votes. The other settings are the same as DensePure. The results are shown in Table E. The number in the bracket is the robust accuracy of BEiT with majority votes minus the robust accuracy of DensePure. We find that simply performing majority votes on the BEiT classifier does not result in higher certified robustness.
Figure B: Certified accuracy among different vote numbers at different radii (left: CIFAR-10, right: ImageNet). Each line represents the certified accuracy among different vote numbers K with Gaussian noise σ = 1.00.
Figure C: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.50.
Third, to compare with randomized smoothing without the diffusion model, we also evaluate certified accuracy with Gaussian augmentation-trained ViT models on CIFAR-10. The results shown in Table F prove that DensePure can still achieve higher certified accuracy than randomized smoothing even on Gaussian-augmented models without diffusion models. The numbers in brackets are the difference between the robust accuracy of Gaussian augmentation randomized smoothing and DensePure.
D.8 EXPERIMENTS FOR K-CONSENSUS AGGREGATION
To improve the efficiency of our algorithm, we try K-Consensus Aggregation, where an early stop is triggered if the classification results of k consecutive reversed samples are the same. Here we calculate the certified robustness for 100 subsamples of CIFAR-10 and ImageNet with 2 sampling steps, a maximum of 10 majority votes, and consensus threshold k = 3. Results are shown in Table G and Table H. The column "Avg MV" in the tables is the average number of majority votes actually required by our algorithm. For instance, if the predicted labels of the first 3 reversed samples are the same, the actual number of majority votes is 3. The numbers in brackets are the difference in certified accuracy with and without K-Consensus Aggregation.
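For concreteness, a sketch of this early-stopping vote is shown below (our own illustration; `reverse_process` and `f` are hypothetical stand-ins for the diffusion reverse process and the classifier).

```python
# Sketch: majority voting with a k-consensus early stop — stop as soon as k
# consecutive reversed samples receive the same label, otherwise fall back to the
# full K-sample majority vote.
from collections import Counter

def consensus_vote(x, reverse_process, f, K=10, k=3):
    labels = []
    for _ in range(K):
        labels.append(f(reverse_process(x)))
        if len(labels) >= k and len(set(labels[-k:])) == 1:
            return labels[-1], len(labels)       # early stop: k-consensus reached
    return Counter(labels).most_common(1)[0][0], len(labels)
```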
D.9 EXPERIMENTS FOR CERTIFIED ACCURACY WITH LESS SAMPLING STEPS AND VOTE NUMBERS
We also conduct additional experiments with 2 sampling steps and 5 majority votes. The results are shown in Table I. We find that our method still achieves better results than the existing method.
Figure D: Certified accuracy with different fast sampling steps b (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 1.00.
Figure E: Certified accuracy with different architectures (left: CIFAR-10, right: ImageNet). Each line shows the certified accuracy among different L2 adversarial perturbation bounds with Gaussian noise σ = 0.25.
D.10 EXPERIMENTS FOR DENSEPURE 500 TEST SAMPLING NUMBER RESULTS ON IMAGENET
We increase the ImageNet test sampling number from 100 to 500 and update the experimental results in Table J and Table K. We can draw a similar conclusion.
Figure F: Certified accuracy on ImageNet for different architectures (left: Wide ResNet-50-2, right: ResNet-152). The lines represent the certified accuracy at different L2 perturbation bounds with different Gaussian noise σ ∈ {0.25, 0.50, 1.00}.
Figure G: Ablation study. The left image shows the certified accuracy among different vote numbers at different radii ϵ ∈ {0.0, 0.25, 0.5, 0.75}; each line represents the certified accuracy of our method among different vote numbers K with Gaussian noise σ = 0.25. The right image shows the certified accuracy with different fast sampling steps b; each line shows the certified accuracy among different L2 adversarial perturbation bounds.
Certified Accuracy at ϵ (%)
Noise       0.0            0.25           0.5            0.75           1.0
σ = 0.25    20.8(-66.8)    7.4(-69.2)     1.8(-62.8)     0.2(-50.2)     0.0(+0.0)
σ = 0.5     11.6(-62.0)    6.6(-58.8)     3.8(-51.8)     1.2(-44.8)     0.2(-37.2)
σ = 1.0     10.6(-44.4)    10.6(-37.4)    9.4(-31.4)     9.4(-23.6)     9.4(-18.8)
Table C: Certified accuracy of randomized smoothing on the pretrained classifier ViT-B/16 at all σ for CIFAR-10.
Certified Accuracy at ϵ (%)
Noise       0.0            0.5            1.0            1.5            2.0            3.0
σ = 0.25    73.2(-10.8)    55.8(-22.0)    0.0(+0.0)      0.0(+0.0)      0.0(+0.0)      0.0(+0.0)
σ = 0.5     7.8(-72.4)     4.6(-71.0)     3.2(-63.8)     1.0(-53.6)     0.0(+0.0)      0.0(+0.0)
σ = 1.0     0.0(-67.8)     0.0(-61.4)     0.0(-55.6)     0.0(-50.0)     0.0(-42.2)     0.0(-25.8)
Table D: Certified accuracy of randomized smoothing on the pretrained classifier BEiT at all σ for ImageNet.
Certified Accuracy at ϵ (%)
Noise       0.0            0.5            1.0            1.5            2.0            3.0
σ = 0.25    73.8(-10.2)    58.0(-19.8)    0.0(+0.0)      0.0(+0.0)      0.0(+0.0)      0.0(+0.0)
σ = 0.5     9.0(-71.2)     7.0(-68.6)     4.0(-63.0)     2.0(-52.6)     0.0(+0.0)      0.0(+0.0)
σ = 1.0     0.0(-67.8)     0.0(-61.4)     0.0(-55.6)     0.0(-50.0)     0.0(-42.2)     0.0(-25.8)
Table E: Certified accuracy of randomized smoothing on droppath-activated BEiT with 10 majority votes at all σ for ImageNet.
Certified Accuracy at ϵ (%)
Noise       0.0            0.25           0.5            0.75           1.0
σ = 0.25    88.2(+0.6)     71.4(-5.2)     53.2(-11.4)    35.2(-15.2)    0.0(+0.0)
σ = 0.5     69.8(-3.8)     60.0(-5.4)     48.4(-7.2)     37.2(-8.8)     27.2(-10.2)
σ = 1.0     49.0(-6.0)     41.8(-6.0)     34.0(-6.8)     27.0(-6.0)     22.0(-6.2)
Table F: Certified accuracy of randomized smoothing on Gaussian augmentation-trained ViT at all σ on CIFAR-10.
Certified Accuracy at ϵ (%)
Noise       0.0          0.25         0.5          0.75         1.0         Avg MV
σ = 0.25    92(+0.0)     77(+0.0)     60(+0.0)     48(-1.0)     0(+0.0)     3.84
σ = 0.5     74(+0.0)     65(+0.0)     53(-1.0)     45(+0.0)     40(+0.0)    4.43
σ = 1.0     53(+0.0)     46(+0.0)     42(+0.0)     31(+0.0)     25(+0.0)    5.49
Table G: Certified accuracy and average majority votes with 2 sampling steps and k = 3 consensus threshold at all σ for CIFAR-10.
Certified Accuracy at ϵ (%)
Noise       0.0          0.5          1.0          1.5          2.0          3.0         Avg MV
σ = 0.25    78(+0.0)     74(+0.0)     0(+0.0)      0(+0.0)      0(+0.0)      0(+0.0)     3.34
σ = 0.5     75(+0.0)     69(+0.0)     61(+0.0)     47(+0.0)     0(+0.0)      0(+0.0)     3.89
σ = 1.0     60(+0.0)     54(+0.0)     50(+0.0)     41(+0.0)     32(+0.0)     23(+0.0)    5.23
Table H: Certified accuracy and average majority votes with 2 sampling steps and k = 3 consensus threshold at all σ for ImageNet.
Certified Accuracy at ϵ (%)
Method                              Off-the-shelf   CIFAR-10: 0.25   0.5          0.75         1.0          ImageNet: 0.5   1.0          1.5          2.0          3.0
PixelDP (Lecuyer et al., 2019)      ✗               (71.0)22.0       (44.0)2.0    -            -            (33.0)16.0      -            -            -            -
RS (Cohen et al., 2019)             ✗               (75.0)61.0       (75.0)43.0   (65.0)32.0   (65.0)23.0   (67.0)49.0      (57.0)37.0   (57.0)29.0   (44.0)19.0   (44.0)12.0
SmoothAdv (Salman et al., 2019a)    ✗               (82.0)68.0       (76.0)54.0   (68.0)41.0   (64.0)32.0   (63.0)54.0      (56.0)42.0   (56.0)34.0   (41.0)26.0   (41.0)18.0
Consistency (Jeong & Shin, 2020)    ✗               (77.8)68.8       (75.8)58.1   (72.9)48.5   (52.3)37.8   (55.0)50.0      (55.0)44.0   (55.0)34.0   (41.0)24.0   (41.0)17.0
MACER (Zhai et al., 2020)           ✗               (81.0)71.0       (81.0)59.0   (66.0)46.0   (66.0)38.0   (68.0)57.0      (64.0)43.0   (64.0)31.0   (48.0)25.0   (48.0)14.0
Boosting (Horváth et al., 2021)     ✗               (83.4)70.6       (76.8)60.4   (71.6)52.4   (73.0)38.8   (65.6)57.0      (57.0)44.6   (57.0)38.4   (44.6)28.6   (38.6)21.2
SmoothMix (Jeong et al., 2021)      ✓               (77.1)67.9       (77.1)57.9   (74.2)47.7   (61.8)37.2   (55.0)50.0      (55.0)43.0   (55.0)38.0   (40.0)26.0   (40.0)17.0
Denoised (Salman et al., 2020)      ✓               (72.0)56.0       (62.0)41.0   (62.0)28.0   (44.0)19.0   (60.0)33.0      (38.0)14.0   (38.0)6.0    -            -
Lee (Lee, 2021)                     ✓               60.0             42.0         28.0         19.0         41.0            24.0         11.0         -            -
Carlini (Carlini et al., 2022)      ✓               (88.0)73.8       (88.0)56.2   (88.0)41.6   (74.2)31.0   (82.0)74.0      (77.2)59.8   (77.2)47.0   (64.6)31.0   (64.6)19.0
Ours                                ✓               (87.6)76.6       (87.6)64.6   (87.6)50.4   (73.6)37.4   (84.0)77.8      (80.2)67.0   (80.2)54.6   (67.8)42.2   (67.8)25.8
Table J: Certified accuracy compared with existing works. The certified accuracy at ϵ = 0 for each model is in parentheses. The certified accuracy for each cell is from the respective papers except Carlini et al. (2022). Our diffusion model and classifier are the same as Carlini et al. (2022), where the off-the-shelf classifier uses ViT-based architectures trained on a large dataset (ImageNet-22k).
Certified Accuracy at ϵ (%)
Methods                          Noise       0.0          0.5          1.0          1.5          2.0          3.0
Carlini (Carlini et al., 2022)   σ = 0.25    82.0         74.0         0.0          0.0          0.0          0.0
Carlini (Carlini et al., 2022)   σ = 0.5     77.2         71.8         59.8         47.0         0.0          0.0
Carlini (Carlini et al., 2022)   σ = 1.0     64.6         57.8         49.2         40.6         31.0         19.0
Ours                             σ = 0.25    84.0(+2.0)   77.8(+3.8)   0.0(+0.0)    0.0(+0.0)    0.0(+0.0)    0.0(+0.0)
Ours                             σ = 0.5     80.2(+3.0)   75.6(+3.8)   67.0(+7.2)   54.6(+7 | 1. What is the focus and contribution of the paper regarding adversarial attacks and defenses?
2. What are the strengths of the proposed approach, particularly in terms of simplicity and use of existing models?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the effectiveness and efficiency of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to increase the robustness to adversarial attacks of an off-the-shelf classifier by using an (off-the-shelf) diffusion model as a data augmentation preprocessing step.
Strengths And Weaknesses
Strengths:
simplicity of the framework
use of off-the-shelf diffusion models and classifiers
Clarity, Quality, Novelty And Reproducibility
Sections 1 and 2 are well written and could be understood by a reader unfamiliar with adversarial attacks/robustness (which is my case). I made educated guesses for sections 3 and 4. In section 5, the experiments are well described and reproducible.
ICLR | Title
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Abstract
Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices, while at the same time they must operate in real-time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving methods, our framework trains DNNs that provide higher accuracies under the same or lower energy budgets.
1 INTRODUCTION
Deep Neural Networks (DNNs) have become the fundamental building blocks of many emerging application domains such as computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), speech recognition (Hinton et al., 2012), and natural language processing (Goldberg, 2016). Many of these applications have to operate in highly energy-constrained environments. For instance, autonomous drones have to continuously perform computer vision tasks (e.g., object detection) without a constant power supply. Designing DNNs that can meet severe energy budgets has increasingly become a major design objective.
The state-of-the-art model compression algorithms adopt indirect techniques to restrict the energy consumption, such as pruning (or sparsification) (He et al., 2018; Han et al., 2015a; Liu et al., 2015; Zhou et al., 2016; Li et al., 2016; Wen et al., 2016) and quantization (Gong et al., 2014; Wu et al., 2016; Han et al., 2015a; Courbariaux et al., 2015; Rastegari et al., 2016). These techniques are agnostic to energy consumption; rather, they are designed to reduce the amount of computation and the number of model parameters in a DNN, which do not truly reflect the energy consumption of a DNN. As a result, these indirect approaches only indirectly reduce the total energy consumption. Recently, Energy-Aware Pruning (EAP) (Yang et al., 2017) proposes a more direct manner to reduce the energy consumption of DNN inference by guiding weight pruning using DNN energy estimation, which achieves higher energy savings compared to the indirect techniques.
However, a fundamental limitation of all existing methods is that they do not provide quantitative energy guarantees, i.e., ensuring that the energy consumption is below a user-specified energy budget. In this paper, we aspire to answer the following key question: how to design DNN models that satisfy a given energy budget while maximizing the accuracy? This work provides a solution to this question through an end-to-end training framework. By end-to-end, we refer to an approach that directly meets the energy budget without relying on heuristics such as selectively restoring pruned weights and layer-by-layer fine-tuning (Han et al., 2015b; Yang et al., 2017). These heuristics are effective in practice but also have many hyper-parameters that must be carefully tuned.
Our learning algorithm directly trains a DNN model that meets a given energy budget while maximizing model accuracy without incremental hyper-parameter tuning. The key idea is to formulate the DNN training process as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. In this way, a DNN model, once trained, by design meets the energy budget while maximizing the accuracy.
Without losing generality, we model the DNN energy consumption after the popular systolic array hardware architecture (Kung, 1982) that is increasingly adopted in today’s DNN hardware chips such as Google’s Tensor Processing Unit (TPU) (Jouppi et al., 2017), NVidia’s Tensor Cores, and ARM’s ML Processor. The systolic array architecture embodies key design principles of DNN hardware that is already available in today’s consumer devices. We specifically focus on pruning, i.e., controlling the DNN sparsity, as the main energy reduction technique. Overall, the energy model models the DNN inference energy as a function of the sparsity of the layer parameters and the layer input.
Given the DNN energy estimation, we formulate DNN training as an optimization problem that minimizes the accuracy loss under the constraint of a certain energy budget. The key difference between our optimization formulation and the formulation in a conventional DNN training is two-fold. First, our optimization problem considers the energy constraint, which is not present in conventional training. Second, layer inputs are non-trainable parameters in conventional DNN training since they depend on the initial network input. We introduce a new concept, called input mask, that enables the input sparsity to be controlled by a trainable parameter, and thus increases the energy reduction opportunities. This lets us further reduce energy in scenarios with known input data pattern.
We propose an iterative algorithm to solve the above optimization problem. A key step in the optimization is the projection operation onto the energy constraint, i.e., finding the model which is closest to the given (dense) model and satisfies the energy constraint. We prove that this projection can be cast as a 0/1 knapsack problem and show that it can be solved very efficiently. Evaluation results show that our proposed training framework can achieve higher accuracy under the same or lower energy compared to the state-of-the-art energy-saving methods.
In summary, we make the following contributions in this paper:
• To the best of our knowledge, this is the first end-to-end DNN training framework that provides quantitative energy guarantees;
• We propose a quantitative model to estimate the energy consumption of DNN inference on TPU-like hardware. The model can be extended to model other forms of DNN hardware;
• We formulate a new optimization problem for energy-constrained DNN training and present a general optimization algorithm that solves the problem.
2 RELATED WORK
Energy-Agnostic Optimizations Most existing DNN optimizations indirectly optimize DNN energy through reducing the model complexity. They are agnostic to the energy consumption, and therefore cannot provide any quantitative energy guarantees.
Pruning, otherwise known as sparsification, is perhaps the most widely used technique to reduce DNN model complexity by reducing computation as well as hardware memory access. It is based on the intuition that DNN model parameters that have low-magnitude have little impact on the final prediction, and thus can be zeroed-out. The classic magnitude-based pruning (Han et al., 2015b) removes weights whose magnitudes are lower than a threshold. Subsequent work guides pruning using special structures (Liu et al., 2015; Zhou et al., 2016; Li et al., 2016; Wen et al., 2016; He et al., 2017), such as removing an entire channel, to better retain accuracy after pruning.
Quantization reduces the number of bits used to encode model parameters, and thus reducing computation energy and data access energy (Gong et al., 2014; Wu et al., 2016; Han et al., 2015a). The extreme case of quantization is using 1-bit to represent model parameters (Courbariaux et al., 2015; Rastegari et al., 2016). Such binary quantization methods are usually trained from scratch instead of quantizing a pre-trained DNN.
Energy-Aware Optimizations Recently, energy-aware pruning (EAP) (Yang et al., 2017) proposes to use a quantitative energy model to guide model pruning. Different from pure magnitude-based pruning methods, EAP selectively prunes the DNN layer that contributes the most to the total energy consumption. It then applies a sequence of fine-tuning techniques to retain model accuracy. The pruning step and fine-tuning step are alternated until the accuracy loss exceeds a given threshold.
Although EAP is a promising first step toward energy-aware optimizations, its key limitation is that it does not provide quantitative energy guarantees because it does not explicitly consider the energy budget as a constraint. Our work integrates the energy budget as an optimization constraint in model training.
Latency-Guaranteed Compression Lately, model compression research has started providing guarantees in execution (inference) latency, which theoretically could be extended to providing energy guarantees as well. However, these methods are primarily search-based through either reinforcement learning (He et al., 2018) or greedy-search (Yang et al., 2018). They search the sparsity setting for every single layer to meet the given budget. Thus, they may require a large number of trials to achieve a good performance, and may not ensure that the resulting model accuracy is maximized.
3 MODELING DNN INFERENCE ENERGY CONSUMPTION
This section introduces the model of estimating energy consumption of a single DNN inference. We consider the widely-used feed-forward DNNs. Note that our proposed methodology can be easily extended to other network architectures as well. In this section, we first provide an overview of our energy modeling methodology (Section 3.1). We then present the detailed per-layer energy modeling (Section 3.2 and Section 3.3), which allow us to then derive the overall DNN energy consumption (Section 3.4). Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim (Samajdar et al., 2018).
DNN model sparsity (via pruning) is well recognized to significantly affect the execution efficiency and thus affect the energy consumption of a DNN model (He et al., 2018; Yang et al., 2017; Han et al., 2015a; Liu et al., 2015; Zhou et al., 2016). We thus use pruning as the mechanism to reduce energy consumption1. Note, however, that model sparsity is not the end goal of our paper; rather we focus on reducing the energy consumption directly. Many dedicated DNN hardware chips (a.k.a., Neural Processing Units, NPUs) (Jouppi et al., 2017; Chen et al., 2016; Han et al., 2016; Parashar et al., 2017) have been developed to directly benefit from model sparsity, and are already widely available in today’s consumer devices such as Apple iPhoneX, Huawei Mate 10, and Microsoft HoloLens. Our paper focuses on this general class of popular, widely-used DNN chips.
3.1 ENERGY MODELING OVERVIEW
A DNN typically consists of a sequence of convolution (CONV) layers and fully connected (FC) layers interleaved with a few other layer types such as Rectified Linear Unit (ReLU) and batch normalization. We focus mainly on modeling the energy consumption of the CONV and FC layers. This is because CONV and FC layers comprise more than 90% of the total execution time during a DNN inference (Chen et al., 2016) and are the major energy consumers (Han et al., 2015a; Yang et al., 2017). Energy consumed by other layer types is insignificant and can be taken away from the energy budget as a constant factor.
A DNN inference’s energy consumption is tied to the underlying hardware that performs the inference. In particular, we assume a systolic-array-based DNN hardware architecture. The systolic array (Kung, 1982) has long been known as an effective approach for matrix multiplication. Many DNN hardware architectures adopt the systolic array, most notably the Google Tensor Processing Unit (TPU) (Jouppi
1Quantization is another useful mechanism to reduce energy consumption. It is orthogonal to the pruning mechanism and they could be combined. This paper specifically focuses on the pruning mechanism.
et al., 2017), Nvidia’s Tensor Cores in their most recent Volta GPUs, and ARM’s ML Processor. Targeting systolic-array-based DNN hardware ensures that our approach has a wide applicability. However, our modeling and training strategies can generally be applied to other DNN architectures.
Figure 1 shows the overall hardware architecture. The systolic array comprises several compute units that perform the Multiply-and-Accumulate (MAC) operation, which conducts the following computation: a ← a + (b × c), where b and c are the two scalar inputs and a is the scalar intermediate result called the “partial sum.” The MAC operation is the building block of matrix multiplication. The MAC units are organized in a 2-D fashion. The data is fed from the edges, both horizontally and vertically, and then propagates to the MAC units within the same rows and columns.
We decompose the energy cost into two parts: computation energy Ecomp and data access energy Edata. Ecomp denotes the energy consumed by computation units, and Edata denotes the energy consumed when accessing data from the hardware memory. Since we mainly use pruning as the energy reduction technique, we now model how Ecomp and Edata are affected by DNN sparsity.
3.2 ENERGY CONSUMPTION FOR COMPUTATION
CONV layers perform convolution and FC layers perform matrix-vector multiplication. Both operations can be generalized to matrix-matrix multiplication, which involves only the MAC operation (Chetlur et al., 2014; Jouppi et al., 2017). Figure 1 illustrates how a matrix-matrix multiplication is carried out on the systolic array hardware. Given X and W, the systolic array computes XW by passing each row of X to each row of the systolic array and each column of W to each column of the systolic array. If the width of the systolic array, denoted by sw, is less than the width of W, the hardware folds W column-wise in strides of sw. Similarly, if the height of X is greater than the height of the systolic array (sh), X is folded row-wise in strides of sh. Figure 1 illustrates a 2×2 systolic array multiplying two 4×4 matrices; both matrices are folded twice in strides of 2. Critically, if either input of a MAC operation is zero, we can skip the MAC operation entirely and thus save the computation energy. At a high level, the total computation energy Ecomp can be modeled as eMAC NMAC, where eMAC denotes the energy consumption of one MAC operation and NMAC denotes the total number of MAC operations that are actually performed. The challenge is to identify NMAC for CONV and FC layers, which we discuss below.
Fully connected layer  Let X^(v) ∈ R^{1×c} be the input vector and W^(v) ∈ R^{c×d} be the weight matrix of FC layer v. The FC layer performs the matrix-vector multiplication X^(v) W^(v). The number of MAC operations is N_MAC = sum(supp(X^(v)) supp(W^(v))), where supp(T) returns a binary tensor indicating the nonzero positions of tensor T. The computation energy of a fully connected layer v is therefore
$$E^{(v)}_{\mathrm{comp}} = e_{\mathrm{MAC}}\,\mathrm{sum}\big(\mathrm{supp}(X^{(v)})\,\mathrm{supp}(W^{(v)})\big) \le e_{\mathrm{MAC}}\,\|W^{(v)}\|_0, \qquad (1)$$
where equality is reached when the input is dense.
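As an illustration, the following Python sketch computes N_MAC and the energy of Equation (1) directly from the sparsity patterns of X^(v) and W^(v); the function name and the unit cost e_mac are placeholders for the hardware constant eMAC, not part of the original implementation.

import numpy as np

def fc_comp_energy(x, w, e_mac=1.0):
    # Computation energy of an FC layer, Eq. (1): e_MAC * sum(supp(x) supp(w)).
    supp_x = (x != 0).astype(np.int64)     # (c,) binary support of the input vector
    supp_w = (w != 0).astype(np.int64)     # (c, d) binary support of the weight matrix
    n_mac = int(np.sum(supp_x @ supp_w))   # MAC operations actually performed
    return e_mac * n_mac                   # never exceeds e_mac * ||w||_0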
Convolution layer  The CONV layer performs the convolution operation between a 4-D weight (also referred to as kernel or filter) tensor and a 3-D input tensor. Let W^(u) ∈ R^{d×c×r×r} be the weight tensor, where d, c, and r are tensor dimension parameters, and let X^(u) ∈ R^{c×h×w} be the input tensor, where h and w are the input height and width. The convolution operation in CONV layer u generates a 3-dimensional tensor:
$$(X^{(u)} * W^{(u)})_{j,y,x} = \sum_{i=1}^{c} \sum_{r',r''=0}^{r-1} X^{(u)}_{i,\,y+r',\,x+r''}\, W^{(u)}_{j,i,r',r''}, \qquad (2)$$
where x, y indicate the position in the output tensor, which has height h' = ⌊(h + 2p − r)/s⌋ + 1 and width w' = ⌊(w + 2p − r)/s⌋ + 1 (p is the convolution padding and s is the convolution stride). The tensor convolution (2) can be seen as a special matrix-matrix multiplication (Chellapilla et al., 2006; Chetlur et al., 2014). Specifically, we unfold the tensor X^(u) into a matrix X̄^(u) ∈ R^{h'w'×cr²} and the tensor W^(u) into a matrix W̄^(u) ∈ R^{cr²×d}; X̄^(u) and W̄^(u) are then multiplied in the systolic array to compute the convolution result between X^(u) and W^(u).
Nonzero elements in X^(u) and W^(u) incur actual MAC operations. Thus, N_MAC = sum(supp(X^(u)) ∗ supp(W^(u))) ≤ h'w'‖W^(u)‖_0 (equality holds when the input is dense), resulting in the following computation energy of a CONV layer u:
$$E^{(u)}_{\mathrm{comp}} = e_{\mathrm{MAC}}\,\mathrm{sum}\big(\mathrm{supp}(X^{(u)}) * \mathrm{supp}(W^{(u)})\big) \le e_{\mathrm{MAC}}\, h'w'\,\|W^{(u)}\|_0. \qquad (3)$$
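Analogously, the CONV computation energy can be evaluated from the unfolded (im2col) view described above. The sketch below uses PyTorch's unfold to build X̄^(u); the function name, argument defaults, and unit cost are illustrative assumptions, and x is assumed to be a float tensor.

import torch
import torch.nn.functional as F

def conv_comp_energy(x, w, stride=1, padding=0, e_mac=1.0):
    # x: (c, h, w) input tensor; w: (d, c, r, r) kernel tensor.
    r = w.shape[-1]
    cols = F.unfold(x.unsqueeze(0), kernel_size=r, padding=padding, stride=stride)  # (1, c*r*r, h'*w')
    x_bar = (cols[0].t() != 0).to(torch.int64)                    # support of the unfolded input, (h'*w', c*r*r)
    w_bar = (w.reshape(w.shape[0], -1).t() != 0).to(torch.int64)  # support of the unfolded kernel, (c*r*r, d)
    n_mac = int((x_bar @ w_bar).sum())                            # nonzero (input, weight) pairs = MACs performed
    return e_mac * n_mac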
3.3 ENERGY CONSUMPTION FOR DATA ACCESS
Accessing data happens in every layer. The challenge in modeling the data access energy is that modern hardware is equipped with a multi-level memory hierarchy in order to improve speed and save energy (Hennessy & Patterson, 2011). Specifically, the data is originally stored in a large memory, which is slow and energy-hungry. When the data is needed for a computation, the hardware loads it from the large memory into a smaller memory that is faster and consumes less energy. If the data is reused often, it will mostly live in the small memory. Thus, such a multi-level memory hierarchy saves overall energy and improves overall speed by exploiting data reuse.
Without losing generality, we model a common, three-level memory hierarchy composed of a Dynamic Random Access Memory (DRAM), a Cache, and a Register File (RF). The cache is split into two halves: one for holding X (i.e., the feature map in a CONV layer and the feature vector in a FC layer) and the other for holding W (i.e., the convolution kernel in a CONV layer and the weight matrix in an FC layer). This is by far the most common memory hierarchy in DNN hardware such as Google’s TPU (Jouppi et al., 2017; Chen et al., 2016; Zhu et al., 2018; Han et al., 2016). Data is always loaded from DRAM into cache, and then from cache to RFs.
In many of today's DNN hardware designs, the activations and weights are stored in a compressed (sparse) format, so that only non-zero values are accessed; this is also done in prior work (Chen et al., 2016; Parashar et al., 2017). Therefore, if the value of the data being loaded is zero, the hardware can skip the data access and thereby save energy. There is a negligible amount of overhead to "unpack" and "pack" compressed data, which we account for by subtracting a constant from the energy budget. This is also the modeling assumption used by Energy-Aware Pruning (Yang et al., 2017).
To compute Edata, we must calculate the number of data accesses at each memory level, i.e., NDRAM, Ncache, and NRF. Let the unit energy costs of the three memory levels be eDRAM, ecache, and eRF, respectively; the total data access energy is then Edata = eDRAM NDRAM + ecache Ncache + eRF NRF. We count the number of data accesses for both the weights and the input, and then combine them. The detailed derivation of the data access energy is included in the Appendix.
3.4 THE OVERALL ENERGY ESTIMATION FORMULATION
Let U and V be the sets of convolutional layers and fully connected layers in a DNN respectively. The superscript (u) and (v) indicate the energy consumption of layer u ∈ U and v ∈ V , respectively. Then the overall energy consumption of a DNN inference can be modeled by
$$E(X, W) := \sum_{u \in U}\big(E^{(u)}_{\mathrm{comp}} + E^{(u)}_{\mathrm{data}}\big) + \sum_{v \in V}\big(E^{(v)}_{\mathrm{comp}} + E^{(v)}_{\mathrm{data}}\big), \qquad (4)$$
where X stacks input vectors/tensors at all layers and W stacks weight matrices/tensors at all layers.
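Equation (4) simply sums the per-layer terms; a minimal sketch, assuming the per-layer energies have already been computed with models such as the ones sketched above:

def dnn_energy(conv_layers, fc_layers):
    # conv_layers, fc_layers: lists of (e_comp, e_data) pairs, one per CONV/FC layer
    return (sum(e_comp + e_data for e_comp, e_data in conv_layers)
            + sum(e_comp + e_data for e_comp, e_data in fc_layers))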
4 ENERGY-CONSTRAINED DNN MODEL
Given the energy model presented in Section 3, we propose a new energy-constrained DNN model that bounds the energy consumption of a DNN’s inference. Different from prior work on model pruning in which energy reduction is a byproduct of model sparsity, our goal is to directly bound the energy consumption of a DNN while sparsity is just used as a means to reduce energy.
This section formulates training an energy-constrained DNN as an optimization problem. We first formulate the optimization constraint by introducing a trainable mask variable into the energy modeling to enforce layer input sparsity. We then define a new loss function by introducing the knowledge distillation regularizer that helps improve training convergence and reduce overfitting.
Controlling Input Sparsity Using Input Mask  The objective of training an energy-constrained DNN is to minimize the accuracy loss while ensuring that the DNN inference energy is below a given budget, Ebudget. Since the total energy consumption is a function of ‖X^(u)‖_0 and ‖W^(u)‖_0, it is natural to treat both X and W as trainable parameters. In reality, however, X depends on the input to the DNN (e.g., an input image to an object recognition DNN) and is thus unknown at training time; in conventional DNN training frameworks X is never trainable.
To include the sparsity of X in our training framework, we introduce a trainable binary mask M that has the same shape as X and is multiplied element-wise with X before X is fed into a CONV or FC layer, or equivalently, at the end of the previous layer. For example, if the input to a standard CONV layer is X^(u), the input becomes X^(u) ⊙ M^(u), where ⊙ denotes element-wise multiplication. In practice, we do not actually perform this multiplication; we only read X^(u) at the nonzero positions of M^(u).
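A minimal PyTorch-style sketch of such a mask module is shown below; the class name is illustrative, and in an actual implementation one such mask would be registered for each masked layer input.

import torch
import torch.nn as nn

class InputMask(nn.Module):
    # Trainable mask M with the same shape as the layer input X; applied element-wise.
    def __init__(self, shape):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(shape))  # initialized to all ones (M = 1)

    def forward(self, x):
        return x * self.mask                         # X ⊙ M; zero mask entries zero out the input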
Algorithm 1: Energy-Constrained DNN Training.
Input: Energy budget Ebudget, learning rates η1, η2, mask sparsity decay step ∆q.
Result: DNN weights W*, input mask M*.
1  Initialize W = Wdense, M = 1, q = ‖M‖0 − ∆q;
2  while True do
       // Update DNN weights
3      while W has not converged do
4          W = W − η1 ∇̂W L̄(M, W);                // SGD step
5          W = P_{Ω(Ebudget)}(W);                  // Energy-constraint projection for weights W
6      end
7      If previous_accuracy > current_accuracy, exit loop with previous W and M;
       // Update input mask
8      while M has not converged do
9          M = M − η2 ∇̂M L̄(M, W);                // SGD step
10         Clamp the values of M into [0, 1]: set values above 1 to 1 and negative values to 0;
11         M = P_{‖M‖0≤q}(M);                      // L0-constraint projection for input mask M
12     end
13     Round the values of M into {0, 1};
14     Decay the sparsity constraint: q = q − ∆q;
15 end
16 W* = W, M* = M.
With the trainable mask M, we can ensure that ‖X^(u) ⊙ M^(u)‖_0 ≤ ‖M^(u)‖_0, and thereby bound the sparsity of the input at training time. In this way, the optimization constraint during training becomes E(M, W) ≤ Ebudget, where E(M, W) denotes the total DNN inference energy consumption, which is a function of X and W (as shown in Equation (4)), and thus a function of M and W.
Knowledge Distillation as a Regularizer  Directly optimizing over the constraint would likely lead to a poor local optimum because the energy model is highly non-convex. Recent works (Mishra & Marr, 2017; Tschannen et al., 2017; Zhuang et al., 2018) observe that knowledge distillation is helpful in training compact DNN models. To improve the training performance, we apply the knowledge distillation loss (Ba & Caruana, 2014) as a regularizer to the conventional loss function. Intuitively, the regularizer uses a pre-trained dense model to guide the training of a sparse model. Specifically, our regularized loss function is
$$\bar{L}_{\lambda, W_{\mathrm{dense}}}(M, W) := (1-\lambda)\,L(M, W) + \lambda\, \mathbb{E}_X\big[\,\|\phi(X; W) - \phi(X; W_{\mathrm{dense}})\|^2 / |\phi(\cdot; W)|\,\big], \qquad (5)$$
where Wdense is the original dense model and L(M, W) is the original loss, e.g., the cross-entropy loss for classification. φ(X; W) is the network's output (we use the output before the last activation layer, as in Ba & Caruana (2014)), |φ(·; W)| is the network output dimensionality, and 0 ≤ λ ≤ 1 is a hyper-parameter similar to those of other standard regularizers.
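A minimal sketch of the regularized loss in Equation (5), assuming the pre-activation outputs of the sparse model and the frozen dense model are available as tensors (argument names are illustrative):

import torch
import torch.nn.functional as F

def distillation_regularized_loss(logits_sparse, logits_dense, targets, lam=0.5):
    task_loss = F.cross_entropy(logits_sparse, targets)
    # Mean squared distance to the frozen dense model's outputs; the mean over batch and
    # output dimension approximates E_X[||.||^2 / |phi|].
    kd_loss = torch.mean((logits_sparse - logits_dense.detach()) ** 2)
    return (1.0 - lam) * task_loss + lam * kd_loss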
Thus, training an energy-constrained DNN model is formulated as an optimization problem:
$$\min_{M, W}\ \bar{L}_{\lambda, W_{\mathrm{dense}}}(M, W) \quad \mathrm{s.t.}\quad E(M, W) \le E_{\mathrm{budget}}. \qquad (6)$$
5 OPTIMIZATION
This section introduces an algorithm to solve the optimization problem formulated in (6). The overall algorithm is shown in Algorithm 1. Specifically, the algorithm includes three key parts:
• Initialization by training a dense model, i.e.,
$$W_{\mathrm{dense}} := \arg\min_{W}\ L(M, W). \qquad (7)$$
• Fix M and optimize W by approximately solving (with the Wdense initialization):
$$\min_{W}\ \bar{L}(M, W) \quad \mathrm{s.t.}\quad E(M, W) \le E_{\mathrm{budget}}. \qquad (8)$$
• Fix W and optimize M by approximately solving:
$$\min_{M}\ \bar{L}(M, W) \quad \mathrm{s.t.}\quad \|M\|_0 \le q,\ M \in [0, 1]. \qquad (9)$$
After the initialization step (Line 1 in Algorithm 1), the training algorithm iteratively alternates between the second step (Lines 3-6 in Algorithm 1) and the third step (Lines 8-13 in Algorithm 1) while gradually reducing the sparsity constraint q (Line 14 in Algorithm 1) until the training accuracy converges. Note that Equation (7) is the classic DNN training process, and solving Equation (9) involves only the well-known L0-norm projection P_{‖M‖0≤q}(Q) := arg min_{‖M‖0≤q} ‖M − Q‖². We thus focus on how Equation (8) is solved.
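The L0 projection keeps the q largest-magnitude entries and zeroes the rest; a minimal sketch (the function name is illustrative):

import torch

def project_l0(m, q):
    # Closest point to m with at most q nonzero entries.
    flat = m.reshape(-1)
    out = torch.zeros_like(flat)
    q = min(int(q), flat.numel())
    if q > 0:
        idx = torch.topk(flat.abs(), q).indices
        out[idx] = flat[idx]
    return out.reshape(m.shape)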
Optimizing Weight Matrix W  To solve (8), one can use either projected gradient descent or projected stochastic gradient descent. The key difficulty lies in the projection step
$$P_{\Omega(E_{\mathrm{budget}})}(Z) := \arg\min_{W \in \Omega(E_{\mathrm{budget}})} \|W - Z\|^2, \qquad (10)$$
where Z could be W − η∇_W L̄(W, M), or the same update with ∇_W L̄(W, M) replaced by a stochastic gradient ∇̂_W L̄(W, M). To solve the projection step, let us take a closer look at the constraint in Equation (4). We rearrange the energy constraint Ω(Ebudget) into the following form with respect to W:
$$\Big\{ W \ \Big|\ \sum_{u \in U \cup V} \alpha^{(u)}_1 \min(k, \|W^{(u)}\|_0) + \alpha^{(u)}_2 \max(0, \|W^{(u)}\|_0 - k) + \alpha^{(u)}_3 \|W^{(u)}\|_0 + \alpha^{(u)}_4 \le E_{\mathrm{budget}} \Big\}, \qquad (11)$$
where W stacks all the variables {W^(u)}_{u∈U∪V}, and α^(u)_1, α^(u)_2, α^(u)_3, α^(u)_4 and k are properly defined nonnegative constants. Note that α^(u)_1 ≤ α^(u)_2 and k is a positive integer. Theorem 1 casts the energy-constrained projection problem to a 0/1 knapsack problem. The proof is included in the Appendix.
Theorem 1. The projection problem in (10) is equivalent to the following 0/1 knapsack problem:
$$\max_{\xi\ \mathrm{binary}}\ \langle Z \odot Z, \xi \rangle, \quad \mathrm{s.t.}\quad \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4, \qquad (12)$$
where Z stacks all the variables {Z^(u)}_{u∈U∪V}, A and ξ have the same shape as Z, and the j-th element of A^(u) for any u ∈ U ∪ V is defined by
$$A^{(u)}_j = \begin{cases} \alpha^{(u)}_1 + \alpha^{(u)}_3, & \text{if } Z^{(u)}_j \text{ is among the top } k \text{ elements of } Z^{(u)} \text{ in terms of magnitude;} \\ \alpha^{(u)}_2 + \alpha^{(u)}_3, & \text{otherwise.} \end{cases} \qquad (13)$$
The optimal solution of (10) is Z ⊙ ξ*, where ξ* is the optimal solution to the knapsack problem (12).
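For one layer, the entries of A in Equation (13) can be assembled as in the following sketch; the α constants and k come from the energy model of Section 3, and the function name is illustrative.

import torch

def layer_item_weights(z, alpha1, alpha2, alpha3, k):
    # The k largest-magnitude entries of z get alpha1 + alpha3; all others get alpha2 + alpha3.
    flat = torch.full((z.numel(),), alpha2 + alpha3, dtype=torch.float32)
    k = min(int(k), z.numel())
    if k > 0:
        top_idx = torch.topk(z.reshape(-1).abs(), k).indices
        flat[top_idx] = alpha1 + alpha3
    return flat.reshape(z.shape)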
The knapsack problem is NP-hard, but approximate solutions can be found efficiently. There exists an approximation algorithm (Chan, 2018) that finds an ε-accurate solution in O(n log(1/ε) + ε^{−2.4}) computational complexity. Due to some special structure in our problem, however, an ε-accurate solution can be found much faster: in the Appendix, we show that a (1 + ε)-approximate solution of problem (10) can be obtained in Õ(n + 1/ε²) time (Õ omits logarithmic factors), though the implementation of that algorithm is complicated. Here we propose an efficient approximate algorithm based on the "profit density." The profit density of item j is defined as Z²_j / A_j. We sort all items by profit density and iteratively select the items with the largest densities until the constraint boundary is reached. The detailed algorithm description is shown in the Appendix (Algorithm 2). This greedy approximation algorithm also admits a nice property, as shown in Theorem 2.
Theorem 2. For the projection problem (10), the approximate solution W'' ∈ Ω(Ebudget) produced by the greedy approximation algorithm admits
$$\|W'' - Z\|^2 \le \|P_{\Omega(E_{\mathrm{budget}})}(Z) - Z\|^2 + \mathrm{Top}_{\|W''\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot \min\big(\max(A) - \gcd(A),\ R(W'')\big), \qquad (14)$$
where max(A) is the maximal element of A, which is a nonnegative matrix defined in (13); Top_k(·) returns the k-th largest element of its argument; ⊘ denotes element-wise division; and gcd(·) is the largest positive rational number that divides every argument, e.g., gcd(0, 1/3, 2/3) = 1/3. In (14), gcd(A) denotes the greatest common divisor of all elements in A (see footnote 2), and R(W'') denotes the remaining budget
$$R(W'') = E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4 - \langle A, \mathrm{supp}(W'')\rangle.$$
2Here we assume A only contains rational numbers since gcd is used.
The formal proof is in the Appendix. W'' is the optimal projection solution to (10) if either of the following conditions holds:
1. (The remaining budget is 0.) The greedy Algorithm 2 runs out of budget;
2. (The matrix A satisfies max(A) = gcd(A).) All elements in A have the same value; in other words, the weights of all items are identical.
6 EVALUATION
The evaluations are performed on the ImageNet (Deng et al., 2009), MNIST, and MS-Celeb-1M (Guo et al., 2016) datasets. For MS-Celeb-1M, we follow the baseline setting reported in the original paper (Guo et al., 2016), which selects the 500 people who have the most face images; we randomly sample 20% of the images as the validation set. We use the classic DNNs AlexNet (Krizhevsky et al., 2012) and LeNet-5 (LeCun et al., 1998), as well as the more recently proposed SqueezeNet (Iandola et al., 2016) and MobileNetV2 (Sandler et al., 2018).
We compare our method mainly with five state-of-the-art pruning methods: magnitude-based pruning (MP) (Han et al., 2015b;a), structured sparsity learning (SSL) (Wen et al., 2016), structured Bayesian pruning (SBP) (Neklyudov et al., 2017), Bayesian compression (BC) (Louizos et al., 2017), and energy-aware pruning (EAP) (Yang et al., 2017). Filter pruning methods (Li et al., 2016; He et al., 2017) require a sparsity ratio to be set for each layer, and these sparsity hyper-parameters determine the energy cost of the DNN. Since manually setting all these hyper-parameters for energy-constrained compression is non-trivial, we instead compare against NetAdapt (Yang et al., 2018), which automatically searches for such sparsity ratios and uses filter pruning to compress DNN models. We implement an energy-constrained version of NetAdapt, which was originally designed to restrict the inference latency. Note that MobileNetV2 and SqueezeNet have special structures (e.g., residual blocks) that are not fully supported by NetAdapt; we therefore show NetAdapt results only for AlexNet and LeNet-5.
[Figure: accuracy drop (%) versus energy consumption for the compared methods.]
Hyper-parameters  In our experiments we observe that knowledge distillation also improves the performance of MP and SSL, so for a fair comparison we apply knowledge distillation to all methods, including the baselines. The results of removing knowledge distillation from MP and SSL are included in the Appendix. We choose the distillation weight λ = 0.5. EAP proposes an alternative way to address the overfitting issue, so we directly use their reported results. For all DNNs we turn off the dropout layers, since we find that the knowledge distillation regularization performs better. In all experiments we choose ∆q = 0.1|M|, where |M| is the total number of mask elements. For optimizing W, we use a pre-trained dense initialization and update W by SGD with learning rate η1 = 0.001 and weight decay 10^{-4}. For optimizing the input mask parameters M, we use the Adam optimizer (Kingma & Ba, 2014) with η2 = 0.0001 and weight decay 10^{-5} (MNIST) / 10^{-6} (MS-Celeb-1M). To stabilize the training process, we exponentially decay the energy budget toward the target budget, and we also use this trick in MP training (i.e., decaying the sparsity budget) for fair comparison.
6.1 IMAGENET
We set an energy budget to be less than the minimal energy consumption among the three baseline methods. We use the same performance metric (i.e. top-5 test accuracy) and hardware parameters, i.e., eMAC, eDRAM, ecache, eRF, sh, sw, kW , kX , as described in the EAP paper (Yang et al., 2017). We initialize all the DNNs by a pre-trained dense model, which is also used to set up the knowledge distillation regularization. The top-5 test accuracies on the dense models are 79.1% (AlexNet), 80.5% (SqueezeNet), and 90.5% (MobileNetV2). We use batch size 128 and train all the methods with 30 epochs. For SSL and NetAdapt, we apply 20 additional epochs to achieve comparable results. We implement the projection operation PΩ(Ebudget) on GPU, and it takes < 0.2s to perform it in our experiments. The detailed wall-clock result is included in the Appendix.
Table 1 shows the top-5 test accuracy drop and energy consumption of various methods compared to the dense model. Our training framework consistently achieves a higher accuracy with a lower
energy consumption under the same energy budget. For instance, on AlexNet, under a smaller energy budget (26% vs. 27%), our method achieves a lower accuracy drop than EAP (0.5% vs. 0.8%). The advantage is also evident on SqueezeNet and MobileNetV2, which are already lightweight by design. EAP does not report data on MobileNetV2. We observe that weight sparsity is not a good proxy for energy consumption: our method achieves lower energy consumption despite having higher density.
Figure 2 comprehensively compares our method with prior work. Solid markers represent DNNs trained with our framework under different energy budgets (x-axis); empty markers represent DNNs produced by previous techniques. DNNs trained by our method have lower energies with higher accuracies (i.e., solid markers are closer to the bottom-left corner than empty markers). For instance, on SqueezeNet, our most energy-consuming DNN still reduces energy by 23% while improving accuracy by 0.2% compared to EAP.
6.2 MNIST AND MS-CELEB-1M
MNIST and MS-Celeb-1M (Guo et al., 2016) represent datasets whose inputs have regular patterns that are amenable to input masking. For instance, MS-Celeb-1M is a face image dataset, and we use its aligned face images, in which most facial features are located in the center of the image. In such scenarios, training input masks lets us control the sparsity of the layer inputs and thus reduce energy further than merely pruning model parameters as conventional methods do. We do not claim that applying an input mask is a general technique; rather, we demonstrate its effectiveness when it is applicable. We compare our method with MP and SSL using LeNet-5 and MobileNetV2 for these two datasets, respectively. The pre-trained dense LeNet-5 has 99.3% top-1 test accuracy on MNIST, and the dense MobileNetV2 has 65.6% top-5 test accuracy on MS-Celeb-1M. EAP does not report data on these two datasets. Similar to the ImageNet evaluation, we set the energy budget to be lower than the energy consumptions of MP and SSL. We use batch size 32 on MNIST and 128 on MS-Celeb-1M; the number of epochs is the same as in the ImageNet experiments. Table 2 compares the energy consumption and accuracy drop: our method consistently achieves higher accuracy with lower energy under the same or an even smaller energy budget. We visualize the sparsity of the learned input masks in Figure 3.
7 CONCLUSION
This paper demonstrates that it is possible to train DNNs with quantitative energy guarantees in an end-to-end fashion. The enabler is an energy model that relates the DNN inference energy to the DNN parameters. Leveraging the energy model, we augment the conventional DNN training with an energy-constrained optimization process, which minimizes the accuracy loss under the constraint of a given energy budget. Using an efficient algorithm, our training framework generates DNNs with higher accuracies under the same or lower energy budgets compared to prior art.
APPENDICES
DETAIL OF ENERGY CONSUMPTION FOR DATA ACCESS
FULLY CONNECTED LAYER
To multiply X^(v) ∈ R^c and W^(v) ∈ R^{c×d}, each nonzero element of W^(v) is used once but loaded three times, once each from DRAM, cache, and RF. Thus, the numbers of DRAM, cache, and RF accesses for the weight matrix W^(v) are
$$N^{\mathrm{weights}}_{\mathrm{DRAM}} = N^{\mathrm{weights}}_{\mathrm{cache}} = N^{\mathrm{weights}}_{\mathrm{RF}} = \|W^{(v)}\|_0. \qquad (15)$$
Input X^(v) is fed into the systolic array ⌈d/s_w⌉ times, where s_w denotes the systolic array width. Thus, the number of cache accesses for X^(v) is
$$N^{\mathrm{input}}_{\mathrm{cache}} = \lceil d/s_w \rceil\, \|X^{(v)}\|_0. \qquad (16)$$
Let k_X be the cache size for input X^(v). If k_X is less than ‖X^(v)‖_0, there are ‖X^(v)‖_0 − k_X elements that must be reloaded from DRAM every time; the remaining k_X elements need to be loaded from DRAM only once, as they always reside in the lower-level memories. Thus, there are ⌈d/s_w⌉(‖X^(v)‖_0 − k_X) + k_X DRAM accesses for X^(v). In addition, the output vector of the FC layer (the result of X^(v) W^(v)) needs to be written back to DRAM, which incurs another d DRAM accesses. Thus, the total number of DRAM accesses related to X^(v) is
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \lceil d/s_w \rceil \max(0, \|X^{(v)}\|_0 - k_X) + \min(k_X, \|X^{(v)}\|_0) + d. \qquad (17)$$
Each input element is loaded from the RF once for each MAC operation, and each MAC operation incurs two additional RF accesses for the accumulation (one read and one write). Thus, the total number of RF accesses related to X^(v) is
$$N^{\mathrm{input}}_{\mathrm{RF}} = d\,\|X^{(v)}\|_0 + 2\,\|W^{(v)}\|_0. \qquad (18)$$
In summary, the data access energy of a fully connected layer v is expressed as follows, in which each component follows the derivations in Equations (15) through (18):
$$E^{(v)}_{\mathrm{data}} = e_{\mathrm{DRAM}}(N^{\mathrm{input}}_{\mathrm{DRAM}} + N^{\mathrm{weights}}_{\mathrm{DRAM}}) + e_{\mathrm{cache}}(N^{\mathrm{input}}_{\mathrm{cache}} + N^{\mathrm{weights}}_{\mathrm{cache}}) + e_{\mathrm{RF}}(N^{\mathrm{input}}_{\mathrm{RF}} + N^{\mathrm{weights}}_{\mathrm{RF}}). \qquad (19)$$
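The following Python sketch assembles Equations (15)-(19) for a single FC layer; the hardware constants (s_w, k_X, and the unit energies) are inputs, and the function name is an illustrative placeholder.

import math
import numpy as np

def fc_data_energy(x, w, s_w, k_x, e_dram, e_cache, e_rf):
    # x: (c,) input vector; w: (c, d) weight matrix.
    nnz_x, nnz_w = int(np.count_nonzero(x)), int(np.count_nonzero(w))
    d = w.shape[1]
    folds = math.ceil(d / s_w)
    n_dram_w = n_cache_w = n_rf_w = nnz_w                           # Eq. (15)
    n_cache_x = folds * nnz_x                                       # Eq. (16)
    n_dram_x = folds * max(0, nnz_x - k_x) + min(k_x, nnz_x) + d    # Eq. (17)
    n_rf_x = d * nnz_x + 2 * nnz_w                                  # Eq. (18)
    return (e_dram * (n_dram_x + n_dram_w) + e_cache * (n_cache_x + n_cache_w)
            + e_rf * (n_rf_x + n_rf_w))                             # Eq. (19)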
CONVOLUTION LAYER
Similar to an FC layer, the data access energy of a CONV layer u is modeled as
$$E^{(u)}_{\mathrm{data}} = e_{\mathrm{DRAM}}(N^{\mathrm{input}}_{\mathrm{DRAM}} + N^{\mathrm{weights}}_{\mathrm{DRAM}}) + e_{\mathrm{cache}}(N^{\mathrm{input}}_{\mathrm{cache}} + N^{\mathrm{weights}}_{\mathrm{cache}}) + e_{\mathrm{RF}}(N^{\mathrm{input}}_{\mathrm{RF}} + N^{\mathrm{weights}}_{\mathrm{RF}}). \qquad (20)$$
The notations are the same as in FC layer. We now show how the different components are modeled.
To convolve W^(u) ∈ R^{d×c×r×r} with X^(u) ∈ R^{c×h×w}, each nonzero element in the weight tensor W^(u) is fed into the systolic array ⌈h'w'/s_h⌉ times, where s_h denotes the height of the systolic array and h' and w' are the output height and width defined in Section 3.2. Thus,
$$N^{\mathrm{weights}}_{\mathrm{cache}} = \lceil h'w'/s_h \rceil\, \|W^{(u)}\|_0. \qquad (21)$$
Similar to the FC layer, the number of RF accesses for W^(u) during all the MAC operations is
$$N^{\mathrm{weights}}_{\mathrm{RF}} = h'w'\,\|W^{(u)}\|_0. \qquad (22)$$
Let k_W be the cache size for the weight tensor W^(u). If ‖W^(u)‖_0 > k_W, then k_W nonzero elements of W^(u) are accessed from DRAM only once, as they reside in the cache, while the remaining ‖W^(u)‖_0 − k_W elements are accessed from DRAM ⌈h'w'/s_h⌉ times. Thus,
$$N^{\mathrm{weights}}_{\mathrm{DRAM}} = \lceil h'w'/s_h \rceil \max(0, \|W^{(u)}\|_0 - k_W) + \min(k_W, \|W^{(u)}\|_0). \qquad (23)$$
Let kX be the cache size for input X(u). If every nonzero element in X(u) is loaded from DRAM to cache only once, N inputDRAM would simply be ‖X(u)‖0. In practice, however, the cache size kX is much
smaller than ‖X(u)‖0. Therefore, some portion of X(u) would need to be re-loaded. To calculate the amount of re-loaded DRAM access, we observe that in real hardware X(u) is loaded from DRAM to the cache at a row-granularity.
When the input X^(u) is dense, at least cw elements are loaded at once. The cache first loads ⌊k_X/(cw)⌋ rows from DRAM, and after the convolutions related to these rows have finished, it loads the next ⌊k_X/(cw)⌋ rows of X^(u) for further processing. The rows loaded in two consecutive rounds overlap due to the nature of the convolution operation. The number of overlaps R_overlap is ⌈h/(⌊k_X/(cw)⌋ − r + s)⌉ − 1, and each overlap has cw(r − s) elements, so R_overlap × cw(r − s) elements need to be reloaded from DRAM. Finally, storing the outputs of the convolution incurs an additional dh'w' DRAM writes. Summing the different parts together, the upper-bound (dense X^(u)) number of DRAM accesses for X^(u) is
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \|X^{(u)}\|_0 + \big(\lceil h/(\lfloor k_X/(cw)\rfloor - r + s)\rceil - 1\big)\, cw(r-s) + d h' w'. \qquad (24)$$
When the input X^(u) is not dense, we can still count the exact number of elements N_overlap in the overlaps of consecutive loading rounds, so we have
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \|X^{(u)}\|_0 + N_{\mathrm{overlap}} + d h' w'. \qquad (25)$$
Every nonzero element in the unfolded input X̄^(u) is fed into the systolic array ⌈d/s_w⌉ times (for grouped convolution, this number is divided by the number of groups), and each MAC operation introduces 2 RF accesses. Thus,
$$N^{\mathrm{input}}_{\mathrm{cache}} = \lceil d/s_w \rceil\, \|\bar{X}^{(u)}\|_0, \qquad N^{\mathrm{input}}_{\mathrm{RF}} = d\,\|\bar{X}^{(u)}\|_0 + 2 h' w'\,\|W^{(u)}\|_0. \qquad (26)$$
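As an illustration, the weight-side access counts of Equations (21)-(23) can be computed as follows (function name and arguments are placeholders for the hardware constants):

import math
import numpy as np

def conv_weight_access_counts(w, h_out, w_out, s_h, k_w):
    # w: (d, c, r, r) kernel tensor; h_out, w_out: output height/width of the layer.
    nnz_w = int(np.count_nonzero(w))
    folds = math.ceil(h_out * w_out / s_h)
    n_cache = folds * nnz_w                                  # Eq. (21)
    n_rf = h_out * w_out * nnz_w                             # Eq. (22)
    n_dram = folds * max(0, nnz_w - k_w) + min(k_w, nnz_w)   # Eq. (23)
    return n_dram, n_cache, n_rf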
PROOF TO THEOREM 1
Proof. First, it is easy to see that (10) is equivalent to the following problem:
$$\max_{\xi\ \mathrm{binary}}\ \langle Z \odot Z, \xi \rangle, \quad \mathrm{s.t.}\quad \xi \in \Omega(E_{\mathrm{budget}}). \qquad (27)$$
Note that if the optimal solution to problem (27) is ξ̄, the solution to problem (10) can be obtained as Z ⊙ ξ̄; given the solution to (10), the solution to (27) can be obtained similarly. Therefore, we only need to prove that (27) is equivalent to (12). Meeting the following two conditions guarantees that (27) and (12) are equivalent, since they have identical objective functions:
1. Any optimal solution of problem (27) is in the constraint set of problem (12);
2. Any optimal solution of problem (12) is in the constraint set of problem (27).
Let us prove the first condition. Let ξ̂ be the optimal solution to (27). Then for any u ∈ U ∪ V , the elements of Z(u) selected by ξ̂(u) are the largest (in terms of magnitude) ‖ξ̂(u)‖0 elements of Z(u); otherwise there would exist at least one element that can be replaced by another element with a larger magnitude, which would increase the objective value in (27). Since ξ̂ ∈ Ω(Ebudget), according to the definition of A in (13), ξ̂ satisfies the constraint of (12).
Let us now prove the second condition. The definition of A in (13) shows that there can be at most two different A^(u) values for each layer u, and the largest k elements in Z^(u) always have the smaller value, i.e., α^(u)_1 + α^(u)_3. Let ξ̄ be the optimal solution to the knapsack problem (12). For any u ∈ U ∪ V, the elements selected by ξ̄^(u) are also the largest elements of Z^(u) in terms of magnitude; otherwise there would exist an element Z^(u)_j that has a larger magnitude but corresponds to a smaller A^(u)_j ((13) shows that A^(u)_i ≥ A^(u)_j when |Z^(u)_i| ≤ |Z^(u)_j|), which would contradict the optimality of ξ̄. In addition, ξ̄ meets the constraint in problem (12). Therefore, ξ̄ ∈ Ω(Ebudget). This completes the proof.
AN (1 + ε)-APPROXIMATE SOLUTION FOR PROBLEM (10)
Theorem 3. For the projection problem (10), there exists an efficient approximation algorithm with computational complexity $O\big((n + \frac{(|U|+|V|)^3}{\epsilon^2}) \log \frac{n \max(A)}{\min(A_+)}\big)$ that generates a solution W' ∈ Ω(Ebudget) admitting
$$\|W' - Z\|^2 \le \Big\| P_{\Omega\left(\frac{E_{\mathrm{budget}}}{1+O(\epsilon)}\right)}(Z) - Z \Big\|^2, \qquad (28)$$
where min(A_+) is the minimum of the positive elements in A.
|U| and |V| denote the numbers of CONV and FC layers, respectively. They are very small numbers that can be treated as constants here. Thus, the computational complexity for our problem reduces to Õ(n + 1/ε²), where Õ omits logarithmic factors. In the following, we prove this theorem by construction.
PROBLEM FORMULATION
Definition 1. Inverted knapsack problem. Given n objects I := {(v_i, w_i)}_{i=1}^n, each with weight w_i > 0 and value v_i ≥ 0, define h_I(x) to be the smallest weight budget that attains total value x:
$$h_I(x) := \min_{\xi \in \{0,1\}^n}\ \sum_{i=1}^{n} w_i \xi_i \quad \mathrm{s.t.}\quad \sum_{i=1}^{n} v_i \xi_i \ge x. \qquad (29)$$
We are more interested in the case where the weights of the n objects fall into m clusters, i.e., there are only m distinct weights: |{w_i}_{i=1}^n| = m.
In our case, m is proportional to the number of layers in the DNN and n is the number of all the learnable weights in W, so m ≪ n.
Definition 2. Inverse of a step function. The inverse of a step function f is defined as the maximal x whose function value is at most y:
$$f^{-1}(y) := \max_{f(x) \le y} x. \qquad (30)$$
Observation. The inverse h_I^{-1}(y) is just the maximal value attainable under the weight budget y, i.e., the original knapsack problem:
$$h_I^{-1}(y) = \max_{\xi \in \{0,1\}^n}\ \sum_{i=1}^{n} v_i \xi_i, \quad \mathrm{s.t.}\quad \sum_{i=1}^{n} w_i \xi_i \le y. \qquad (31)$$
Observation Given a step function with l breakpoints, its inverse can be generated with O(l) time complexity, and vice versa.
Thus, given the step function h_I in (29) with l breakpoints, we can obtain h_I^{-1} (i.e., the original knapsack problem) within O(l) time complexity.
Definition 3. w-uniform. A step function f is w-uniform if the range of f is {−∞, 0, w, 2w, ..., lw}.
Observation. If all the objects in I have the same weight w, i.e., m = 1, then the function h_I(x) is nondecreasing and w-uniform. Moreover, if the objects are indexed in decreasing order of value, i.e., v_1 ≥ v_2 ≥ ... ≥ v_n, its breakpoints are
$$(0, 0),\ (v_1, w),\ (v_1 + v_2, 2w),\ \ldots,\ \Big(\sum_{i=1}^{n} v_i,\ nw\Big),$$
and all possible function values of h_I(x) are given by
$$h_I(x) = kw, \quad \forall x \in \Big(\sum_{i=1}^{k-1} v_i,\ \sum_{i=1}^{k} v_i\Big].$$
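A small Python sketch of this observation: for a single shared weight w, the breakpoints of h_I follow directly from the values sorted in descending order (the function name is illustrative).

def h_single_weight_breakpoints(values, w):
    # Returns the breakpoints (cumulative value, k * w) of h_I when all objects share weight w.
    v = sorted(values, reverse=True)
    breakpoints = [(0.0, 0.0)]
    cum = 0.0
    for k, vi in enumerate(v, start=1):
        cum += vi
        breakpoints.append((cum, k * w))  # h_I(x) = k*w for x in (previous cum, cum]
    return breakpoints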
Definition 4. (min, +)-convolution. For functions f and g, the (min, +)-convolution is
$$(f \oplus g)(x) = \min_{x'} \big(f(x') + g(x - x')\big).$$
Observation. If the object sets satisfy I_1 ∩ I_2 = ∅, then
$$f_{I_1 \cup I_2} = f_{I_1} \oplus f_{I_2}.$$
Observation. The inverse of the (min, +)-convolution of w-uniform functions f and g is the (max, +)-convolution of f^{-1} and g^{-1}:
$$(f \oplus g)^{-1}(y) = \max_{y' \in \{0,\, 1w,\, \ldots,\, lw\}} \big(f^{-1}(y') + g^{-1}(y - y')\big). \qquad (32)$$
Lemma 4. For any nonnegative step functions f and g and an arbitrary number b, we always have
$$\min\{f \oplus g,\ b\} = \min\{\min\{f, b\} \oplus \min\{g, b\},\ b\}. \qquad (33)$$
Proof. Given any x, let z ∈ Arg minx′ f(x′) + g(x − x′) and z̄ ∈ Arg minx′ min(f(x′), b) + min(g(x− x′), b), so we have (f ⊕ g)(x) = f(z) + g(x− z) and (min{f, b} ⊕min{g, b})(x) = min(f(z̄), b) + min(g(x− z̄), b). Consider the following cases:
1. (f ⊕ g)(x) ≥ b. In this case, we claim that (min{f, b} ⊕min{g, b})(x) ≥ b. We prove it by contradiction. Suppose (min{f, b} ⊕min{g, b})(x) < b which implies min(f(z̄), b) + min(g(x − z̄), b) < b. Because both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) < b, However, this contradicts (f ⊕ g)(x) ≥ b. Therefore, we have min((f ⊕ g)(x), b) = min((min{f, b} ⊕min{g, b})(x), b) = b.
2. (f ⊕ g)(x) < b. In this case, we have f(z) < b and g(x − z) < b, so min(f(z̄), b) + min(g(x− z̄), b) ≤ min(f(z), b)+min(g(x−z), b) = f(z)+g(x−z) = (f ⊕g)(x) < b. Since both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) ≥ (f ⊕ g)(x). Therefore, we have min(f(z̄), b) + min(g(x − z̄), b) = f(z) + g(x − z) ⇔ (min{f, b} ⊕ min{g, b})(x) = (f ⊕ g)(x).
EFFICIENCY OF (MIN, +)-CONVOLUTION
Lemma 5. Let f and g be nondecreasing w-uniform functions with O(l) breakpoints, the (min, +)-convolution f ⊕ g (having O(l) breakpoints) can be generated with O(l2) time complexity.
Proof. Firstly, we compute the inverse representation of f and g, i.e. compute f−1 and g−1 from Equation (30). The inverse representation can be computed in O(l) time (proportional to the number of breakpoints). From Equation (32), we can compute the inverse of f ⊕ g. For each y ∈ {0, 1w, ..., 2lw}, function (f ⊕ g)−1(y) can be computed in O(l) time by brute force. Thus a total O(l2) is enough to get (f ⊕ g)−1 which has O(l) breakpoints. We can get f ⊕ g via (f ⊕ g)−1 by the inverse definition (30) in O(l) time.
Lemma 6. Let f and g be nondecreasing step functions with l breakpoints in total. Then min{f ⊕ g, b} can be approximated by a step function φ_b with O(l + 1/ε²) complexity and 2εb additive error, i.e., min{f ⊕ g, b} ≤ φ_b ≤ min{f ⊕ g, b} + 2εb. The resulting function φ_b has O(1/ε) breakpoints.
Proof. We can construct (εb)-uniform functions f'_b and g'_b which have ⌈1/ε⌉ breakpoints:
$$f'_b(x) = \Big\lceil \frac{\min(b, f(x))}{\epsilon b} \Big\rceil \epsilon b, \qquad g'_b(x) = \Big\lceil \frac{\min(b, g(x))}{\epsilon b} \Big\rceil \epsilon b.$$
This needs O(l) computational complexity. From Lemma 5, we can compute f'_b ⊕ g'_b with O(1/ε²) time complexity, and φ_b = min{f'_b ⊕ g'_b, b} has O(1/ε) breakpoints. Because f'_b and g'_b are constructed by ceiling min{f, b} and min{g, b}, we have
$$\min\{f, b\} \oplus \min\{g, b\} \ \le\ f'_b \oplus g'_b \ \le\ \min\{f, b\} \oplus \min\{g, b\} + 2\epsilon b,$$
which implies
$$\min\{\min\{f, b\} \oplus \min\{g, b\},\ b\} \ \le\ \min\{f'_b \oplus g'_b,\ b\} \ \le\ \min\{\min\{f, b\} \oplus \min\{g, b\},\ b\} + 2\epsilon b.$$
From Lemma 4, we know that min{min{f, b} ⊕ min{g, b}, b} = min{f ⊕ g, b}, which completes the proof.
Lemma 7. Let f_1, f_2, ..., f_m be nondecreasing step functions with l breakpoints in total. Then min{f_1 ⊕ f_2 ⊕ ... ⊕ f_m, b} can be approximated by a step function ψ_b with O(l + m/ε²) computational complexity and mεb additive error. The resulting function ψ_b has O(1/ε) breakpoints.
Proof. Lemma 6 covers the case m = 2. For general m > 2, we can construct a binary tree to approximate pairs of functions; e.g., if m = 4, we first approximate ψ^(1) ≈ min{f_1 ⊕ f_2, b} and ψ^(2) ≈ min{f_3 ⊕ f_4, b}, and then approximate ψ^(3)_b ≈ min{ψ^(1) ⊕ ψ^(2), b}. In this way, we construct a binary tree with O(log m) depth and O(m) nodes. In the beginning, we use the ceiling function to construct m new (εb)-uniform functions:
$$f'_{i,b}(x) = \Big\lceil \frac{\min(b, f_i(x))}{\epsilon b} \Big\rceil \epsilon b, \quad \forall i \in \{1, 2, \ldots, m\}.$$
Then we can use the binary tree to “merge” all the m functions in pairs, via O(logm) iterations. Without loss of generality, we assume m is a power of two. We can recursively merge t functions into t/2 functions:
1. Initialize t = m, g'_{i,b} = f'_{i,b}, ∀i ∈ {1, ..., t}.
2. Reassign g'_{i,b} = min{g'_{2i−1,b} ⊕ g'_{2i,b}, b}, ∀i ∈ {1, ..., t/2}. According to Lemma 6, the number of breakpoints of min{g'_{2i−1,b} ⊕ g'_{2i,b}, b} is still O(1/ε).
3. t = t/2. If t > 1, go back to Step 2.
4. Return ψ_b := min{g'_{1,b}, b}.
For this binary tree, the functions at the bottom leaf nodes have εb additive error, and every (min, +)-convolution f' ⊕ g' accumulates the additive errors of the two functions f' and g'. The root node of the binary tree accumulates the additive errors of all m leaf nodes, so the resulting function satisfies ψ_b ≤ min{f_1 ⊕ ... ⊕ f_m, b} + mεb. As for the computational complexity, constructing the f'_{i,b} takes O(l), Step 1 takes O(l), Steps 2 and 3 take O(m/ε²) (since there are O(m) nodes in the binary tree), and Step 4 takes O(m/ε). Therefore, the total is O(l + m/ε²).
Lemma 8. For the inverted knapsack problem defined in Equation (29), if all the n objects can be separated into m groups I_1, ..., I_m with m distinct weights, there exists an approximation algorithm with computational complexity $O\big((n + \frac{m^3}{\epsilon^2}) \log \frac{n \max(w)}{\min(w)}\big)$ that approximates h_I by a function h̃_I satisfying
$$h_I(x) \ \le\ \tilde{h}_I(x) \ \le\ (1 + O(\epsilon))\, h_I(x), \quad \forall x.$$
Proof. Firstly, the step function hIi ,∀i ∈ {1, 2, ...,m} can be easily generated within O(n log n) by sorting the objects of each group according to their values (in descending order). From the definition of (min, +)-convolution, we know that hI = hI1 ⊕ ... ⊕ hIm . Let us construct an algorithm to approximate hI :
1. Construct a set B := {2^i n max(w) ∈ [min(w), n max(w)] ; i ∈ Z_{≤0}}, where min(w) and max(w) are the minimum and maximum item weights, respectively, and Z_{≤0} is the set of nonpositive integers. We have |B| = O(log (n max(w)/min(w))).
2. For every b ∈ B, construct ψb to approximate min{hI1 ⊕ ...⊕ hIm , b} based on Lemma 7.
3. Construct the function h̃_I^{-1}:
$$\tilde{h}_I^{-1}(y) = \begin{cases} \psi_b^{-1}(y), & \text{if } b/2 < y \le b \text{ and } y > \min(B); \\ \psi_{\min(B)}^{-1}(y), & \text{if } y \le \min(B), \end{cases}$$
where min(B) is the minimum element of B. The resulting function h̃_I^{-1} (or h̃_I) has at most $O\big(\frac{1}{\epsilon} \log \frac{n \max(w)}{\min(w)}\big)$ breakpoints.
4. Compute the original function h̃I from h̃−1I .
According to the above procedure, for any h_I(x) ∈ (b/2, b], h̃_I(x) approximates h_I(x) with additive error O(mεb), so we have h_I(x) ≤ h̃_I(x) ≤ (1 + O(mε)) h_I(x). The algorithm takes $O\big((n + m/\epsilon^2) \log \frac{n \max(w)}{\min(w)}\big)$ time; if we require the approximation factor to be 1 + O(ε), i.e.,
$$h_I(x) \ \le\ \tilde{h}_I(x) \ \le\ (1 + O(\epsilon))\, h_I(x), \quad \forall x,$$
then we need $O\big((n + m^3/\epsilon^2) \log \frac{n \max(w)}{\min(w)}\big)$ time complexity.
Theorem 9. For the knapsack problem defined in Equation (12), if all the n objects have m distinct weights, there exists an approximation algorithm with computational complexity $O\big((n + \frac{m^3}{\epsilon^2}) \log \frac{n \max(w)}{\min(w)}\big)$ that generates a function h̃_I^{-1} satisfying
$$h_I^{-1}\Big(\frac{y}{1 + O(\epsilon)}\Big) \ \le\ \tilde{h}_I^{-1}(y) \ \le\ h_I^{-1}(y), \quad \forall y.$$
Proof. From Lemma 8, we have h̃_I(x) ≤ (1 + O(ε)) h_I(x), which implies
$$\{x \mid (1 + O(\epsilon))\, h_I(x) \le y\} \ \subseteq\ \{x \mid \tilde{h}_I(x) \le y\}.$$
So
$$\max_{h_I(x) \le y/(1+O(\epsilon))} x \ \le\ \max_{\tilde{h}_I(x) \le y} x \quad \Leftrightarrow \quad h_I^{-1}\Big(\frac{y}{1 + O(\epsilon)}\Big) \le \tilde{h}_I^{-1}(y).$$
Similarly, from Lemma 8 we can get {x | h̃_I(x) ≤ y} ⊆ {x | h_I(x) ≤ y}, so we have
$$\max_{h_I(x) \le y} x \ \ge\ \max_{\tilde{h}_I(x) \le y} x \quad \Leftrightarrow \quad h_I^{-1}(y) \ge \tilde{h}_I^{-1}(y).$$
Let I_+ be the set of objects whose weights are the nonzero elements of A and whose values are the corresponding elements of Z ⊙ Z, i.e., I_+ = {(Z_i², A_i) | i ∈ {1, 2, ..., |A|} and A_i > 0}, and let ξ̃^+ be the solution corresponding to $\tilde{h}_{I_+}^{-1}\big(E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4\big)$. Let ξ̃_{I_+^c} = 1 and ξ̃_{I_+} = ξ̃^+, where I_+^c = {(Z_i², A_i) | i ∈ {1, 2, ..., |A|} and A_i = 0} is the complement of I_+. Here we have m ≤ 2|U| + |V| distinct values in A. According to Theorem 9, we have
$$\langle Z \odot Z, \tilde{\xi} \rangle \ \ge\ \max_{\xi}\ \langle Z \odot Z, \xi \rangle \quad \mathrm{s.t.}\quad \langle A, \xi \rangle \le \frac{E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4}{1 + O(\epsilon)},$$
which implies
$$\langle Z \odot Z, \tilde{\xi} \rangle \ \ge \max_{\xi \in \Omega(E_{\mathrm{budget}}/(1+O(\epsilon)))} \langle Z \odot Z, \xi \rangle.$$
From Theorem 9, we can directly obtain Theorem 3.
Algorithm 2: Greedy Algorithm to Solve Problem (12).
Input: Z, A, Ebudget, {α^(u)}_{u∈U∪V} as in (12).
Result: Greedy solution ξ̃ for problem (12).
1  Initialize b = 0, ξ = 0.
2  Generate the profit density δ: δ_j = (Z_j)²/A_j if A_j > 0; δ_j = ∞ if A_j = 0.
3  Sort δ; let I be the list of indices of the sorted δ (in descending order).
4  foreach index j ∈ I do
5      b = b + A_j;
6      If b > Ebudget − Σ_{u∈U∪V} α^(u)_4, exit loop;
7      ξ_j = 1;
8  end
9  ξ̃ = ξ.
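A compact Python sketch of Algorithm 2 is given below (an illustration under our assumptions, not the original implementation); `budget` stands for Ebudget minus the constant term Σ_{u∈U∪V} α^(u)_4.

import numpy as np

def greedy_knapsack(z, a, budget):
    # Greedy profit-density selection: items with a_j = 0 are free and taken first.
    z = np.asarray(z, dtype=float).reshape(-1)
    a = np.asarray(a, dtype=float).reshape(-1)
    xi = np.zeros_like(z)
    density = np.where(a > 0, z ** 2 / np.where(a > 0, a, 1.0), np.inf)
    spent = 0.0
    for j in np.argsort(-density):       # descending profit density
        spent += a[j]
        if spent > budget:               # line 6 of Algorithm 2: exit when budget exceeded
            break
        xi[j] = 1.0
    return xi                            # greedy solution; the projected weights are Z * xi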
PROOF TO THEOREM 2
Proof. From Theorem 1, we know that the original projection problem (10) is equivalent to the knapsack problem (12). So proving inequality (14) is equivalent to proving
$$\langle Z \odot Z, \tilde{\xi} \rangle \ \ge\ \langle Z \odot Z, \xi^* \rangle - \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot R(\tilde{\xi}) \qquad (34)$$
and
$$\langle Z \odot Z, \tilde{\xi} \rangle \ \ge\ \langle Z \odot Z, \xi^* \rangle - \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot (\max(A) - \gcd(A)), \qquad (35)$$
where ξ̃ is the greedy solution of the knapsack problem corresponding to W'', and ξ* is the exact solution of the knapsack problem corresponding to P_{Ω(Ebudget)}(Z), i.e.,
$$W'' = Z \odot \tilde{\xi}, \qquad P_{\Omega(E_{\mathrm{budget}})}(Z) = Z \odot \xi^*.$$
First, let us prove inequality (34). If we relax the values of ξ to lie in the range [0, 1] instead of {0, 1}, the discrete constraint is removed and the constraint set becomes
$$\Delta = \Big\{ \xi \ \Big|\ 0 \le \xi \le 1 \ \text{and}\ \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4 \Big\}.$$
So the 0/1 knapsack problem is relaxed to a linear program. This relaxed problem is called the fractional knapsack problem, and there is a greedy algorithm (Dantzig, 1957) that solves it exactly. Slightly different from our Algorithm 2, the greedy algorithm for the fractional knapsack can select a fraction of an item, so its remaining budget is always zero. The optimal objective value of the fractional knapsack is
$$\max_{\xi \in \Delta}\ \langle Z \odot Z, \xi \rangle = \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot R(\tilde{\xi}).$$
Since the constraint set of the fractional knapsack problem is a superset of the constraint set of the original knapsack problem, we have $\langle Z \odot Z, \xi^* \rangle \le \max_{\xi \in \Delta} \langle Z \odot Z, \xi \rangle$, which leads to inequality (34). Second, we show that inequality (35) is also true. Since all the coefficients in A are multiples of gcd(A), we can relax the original 0/1 knapsack problem as follows: each item is split into several items whose coefficients in the constraint are gcd(A), with the coefficient in the objective function split equally. For the j-th item, the coefficient in the constraint is A_j and the coefficient in the objective function is (Z ⊙ Z)_j; it is split into A_j/gcd(A) items, each associated with coefficient (Z²_j/A_j) · gcd(A) in the objective function. This relaxation gives a new 0/1 knapsack problem in which all items have the same coefficient in the constraint, so the optimal solution simply selects the items with the largest coefficients in the objective function. We can formulate this problem as a relaxed knapsack problem by replacing the constraint on ξ with ξ ∈ Γ, where
$$\Gamma = \Big\{ \xi \ \Big|\ \text{for all } j,\ \xi_j \text{ is a multiple of } \tfrac{\gcd(A)}{A_j},\ 0 \le \xi_j \le 1,\ \text{and}\ \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4 \Big\}.$$
All the elements of the resulting solution are either 0 or 1 except the last picked one, which corresponds to Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A). Let the (‖ξ̃‖_0 + 1)-th largest element of (Z ⊙ Z) ⊘ A be indexed by t. We have 0 ≤ ξ̃_t ≤ 1 − gcd(A)/A_t. Therefore, comparing with the original 0/1 knapsack problem, we have
$$\begin{aligned} \max_{\xi \in \Gamma}\ \langle Z \odot Z, \xi \rangle &\le \langle Z \odot Z, \tilde{\xi} \rangle + (Z \odot Z)_t \cdot (1 - \gcd(A)/A_t) \\ &= \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot A_t \cdot (1 - \gcd(A)/A_t) \\ &= \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot (A_t - \gcd(A)) \\ &\le \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0+1}\big((Z \odot Z) \oslash A\big) \cdot (\max(A) - \gcd(A)). \end{aligned}$$
Since every feasible binary ξ belongs to Γ, we have $\langle Z \odot Z, \xi^* \rangle \le \max_{\xi \in \Gamma} \langle Z \odot Z, \xi \rangle$. This gives inequality (35).
SUPPLEMENTARY EXPERIMENT RESULTS
RESULTS OF BASELINE WITHOUT KNOWLEDGE DISTILLATION
Table 3 shows the energy and accuracy drop results of the baseline methods MP and SSL when the knowledge distillation is removed from their loss function. By using knowledge distillation, the results in Table 1 are much better. Therefore, we use knowledge distillation in all the experiments when it is applicable.
ENERGY-CONSTRAINED PROJECTION EFFICIENCY
The projection operation PΩ(Ebudget) in Algorithm 1 can be implemented on GPU. We measured its wall-clock time on a GPU server (CPU: Xeon E3 1231-v3, GPU: GTX 1080 Ti), and the result is shown in Table 4 (the time is averaged over 100 iterations). | 1. What is the main contribution of the paper regarding neural network training under energy constraints?
2. What are the strengths and weaknesses of the proposed method compared to previous works?
3. How does the activation sparsity parameter affect the accuracy-energy consumption tradeoff, and how does it compare to other methods in terms of weight and activation sparsity?
4. How does the proposed method compare to filter pruning methods in terms of efficiency and gains under the proposed energy consumption model?
5. How does the guarantee of the weight projection problem translate into wall-clock time?
6. Is weight sparsity a good proxy for energy consumption in the experiments on Imagenet, and how do the benefits of the activation mask compose in these experiments?
7. Can knowledge distillation be helpful when constraining neural network weights to be quantized and/or sparse, and how does it relate to the proposed method?

Review
The paper proposes a method for neural network training under a hard energy constraint (i.e. the method guarantees the energy consumption to be upper bounded). Based on a systolic array hardware architecture the authors model the energy consumption of transferring the weights and activations into different levels of memory (DRAM, Cache, register file) during inference. The energy consumption is therefore determined by the number of nonzero elements in the weight and activation tensors. To minimize the network loss under an energy constraint, the authors develop a training framework including a novel greedy algorithm to compute the projection of the weight tensors to the energy constraint.
Pros:
The proposed method makes it possible to accurately impose an energy constraint (in terms of the proposed model), in contrast to previous methods, and also yields higher accuracy than these on some data sets. The proposed solution seems sound (although I did not check the proofs in detail, and I am not very familiar with hardware energy consumption subtleties).
Questions:
The experiments in Sec. 6.2 suggest that the activation mask is mainly beneficial when the data is highly structured. How are the benefits (in terms of weight and activation sparsity) composed in the experiments on Imagenet? How does the weight sparsity of the proposed method compare to the related methods in these experiments? Is weight sparsity in these cases a good proxy for energy consumption?
How does the activation sparsity (decay) parameter ∆q affect the accuracy-energy consumption tradeoff for the two data sets?
The authors show that the weight projection problem can be solved efficiently. How does the guarantee translate into wall-clock time?
Filter pruning methods [1,2] reduce both the size of the weight and activation tensors, while not requiring to solve a complicated projection problem or introducing activation masks. It would be good to compare to these methods, or at least comment on the gains to be expected under the proposed energy consumption model.
Knowledge distillation has previously been observed to be quite helpful when constraining neural network weights to be quantized and/or sparse, see [3,4,5]. It might be worth mentioning this.
Minor comments:
- Sec. 3.4. 1st paragraph: subscript -> superscript
- Sec. 6.2 first paragraph: pattens -> patterns, aliened -> aligned
[1] He, Y., Zhang, X., & Sun, J. (2017). Channel pruning for accelerating very deep neural networks. ICCV 2017.
[2] Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. Pruning filters for efficient convnets. ICLR 2017.
[3] Mishra, A., & Marr, D. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. ICLR 2018.
[4] Tschannen, M., Khanna, A., & Anandkumar, A. StrassenNets: Deep learning with a multiplication budget. ICML 2018.
[5] Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. Towards effective low-bitwidth convolutional neural networks. CVPR 2018. |
This section introduces the model of estimating energy consumption of a single DNN inference. We consider the widely-used feed-forward DNNs. Note that our proposed methodology can be easily extended to other network architectures as well. In this section, we first provide an overview of our energy modeling methodology (Section 3.1). We then present the detailed per-layer energy modeling (Section 3.2 and Section 3.3), which allow us to then derive the overall DNN energy consumption (Section 3.4). Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim (Samajdar et al., 2018).
DNN model sparsity (via pruning) is well recognized to significantly affect the execution efficiency and thus affect the energy consumption of a DNN model (He et al., 2018; Yang et al., 2017; Han et al., 2015a; Liu et al., 2015; Zhou et al., 2016). We thus use pruning as the mechanism to reduce energy consumption1. Note, however, that model sparsity is not the end goal of our paper; rather we focus on reducing the energy consumption directly. Many dedicated DNN hardware chips (a.k.a., Neural Processing Units, NPUs) (Jouppi et al., 2017; Chen et al., 2016; Han et al., 2016; Parashar et al., 2017) have been developed to directly benefit from model sparsity, and are already widely available in today’s consumer devices such as Apple iPhoneX, Huawei Mate 10, and Microsoft HoloLens. Our paper focuses on this general class of popular, widely-used DNN chips.
3.1 ENERGY MODELING OVERVIEW
A DNN typically consists of a sequence of convolution (CONV) layers and fully connected (FC) layers interleaved with a few other layer types such as Rectified Linear Unit (ReLU) and batch normalization. We focus mainly on modeling the energy consumption of the CONV and FC layers. This is because CONV and FC layers comprise more than 90% of the total execution time during a DNN inference (Chen et al., 2016) and are the major energy consumers (Han et al., 2015a; Yang et al., 2017). Energy consumed by other layer types is insignificant and can be taken away from the energy budget as a constant factor.
A DNN inference’s energy consumption is tied to the underlying hardware that performs the inference. In particular, we assume a systolic-array-based DNN hardware architecture. The systolic array (Kung, 1982) has long been known as an effective approach for matrix multiplication. Many DNN hardware architectures adopt the systolic array, most notably the Google Tensor Processing Unit (TPU) (Jouppi et al., 2017), Nvidia’s Tensor Cores in their most recent Volta GPUs, and ARM’s ML Processor. Targeting systolic-array-based DNN hardware ensures that our approach has wide applicability. However, our modeling and training strategies can generally be applied to other DNN architectures.
1 Quantization is another useful mechanism to reduce energy consumption. It is orthogonal to the pruning mechanism and the two could be combined. This paper specifically focuses on the pruning mechanism.
Figure 1 shows the overall hardware architecture. The systolic array comprises several compute units that perform the Multiply-and-Accumulate (MAC) operation, which conducts the computation a ← a + (b × c), where b and c are the two scalar inputs and a is the scalar intermediate result called the “partial sum.” The MAC operation is the building block of matrix multiplication. The MAC units are organized in a 2-D fashion. The data is fed from the edges, both horizontally and vertically, and then propagates to the MAC units within the same rows and columns.
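The following Python sketch mimics this folding-and-skipping behavior for a tiny array; it is only an illustration of the scheduling idea (the array sizes and data are made up), not a model of any specific accelerator.

import numpy as np

def folded_matmul_mac_count(X, W, s_h=2, s_w=2):
    """Multiply X @ W the way a small systolic array would: fold the rows of X
    in strides of s_h and the columns of W in strides of s_w, and skip every
    MAC whose operands contain a zero. Returns the product and the number of
    MACs actually performed."""
    n, c = X.shape
    c2, d = W.shape
    assert c == c2
    out = np.zeros((n, d))
    macs = 0
    for r0 in range(0, n, s_h):          # fold X row-wise
        for c0 in range(0, d, s_w):      # fold W column-wise
            for i in range(r0, min(r0 + s_h, n)):
                for j in range(c0, min(c0 + s_w, d)):
                    for k in range(c):
                        if X[i, k] != 0 and W[k, j] != 0:   # zero-operand MACs are skipped
                            out[i, j] += X[i, k] * W[k, j]
                            macs += 1
    return out, macs

X = np.array([[1., 0., 2., 0.], [0., 0., 3., 1.], [4., 0., 0., 0.], [1., 1., 0., 0.]])
W = np.eye(4)
prod, n_mac = folded_matmul_mac_count(X, W)
assert np.allclose(prod, X @ W)
print(n_mac)   # only the nonzero-operand MACs are counted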
We decompose the energy cost into two parts: computation energy Ecomp and data access energy Edata. Ecomp denotes the energy consumed by computation units, and Edata denotes the energy consumed when accessing data from the hardware memory. Since we mainly use pruning as the energy reduction technique, we now model how Ecomp and Edata are affected by DNN sparsity.
3.2 ENERGY CONSUMPTION FOR COMPUTATION
CONV layers perform convolution and FC layers perform matrix-vector multiplication. Both operations can be generalized to matrix-matrix multiplication, which involves only the MAC operation (Chetlur et al., 2014; Jouppi et al., 2017). Figure 1 illustrates how a matrix-matrix multiplication is carried out on the systolic array hardware. Given X and W, the systolic array computes XW by passing each row of X to each row in the systolic array and passing each column of W to each column in the systolic array. If the width of the systolic array, denoted by s_w, is less than the width of W, the hardware folds W column-wise in strides of s_w. Similarly, if the height of X is greater than the height of the systolic array (s_h), X is folded row-wise in strides of s_h. Figure 1 illustrates a 2×2 systolic array multiplying two 4×4 matrices; both matrices are folded twice in strides of 2. Critically, if either input of a MAC operation is zero, we can skip the MAC operation entirely and thus save the computation energy. At a high level, the total computation energy E_comp can be modeled as e_MAC N_MAC, where e_MAC denotes the energy consumption of one MAC operation and N_MAC denotes the total number of MAC operations that are actually performed. The challenge is to identify N_MAC for CONV and FC layers, which we discuss below.
Fully connected layer Let $X^{(v)} \in \mathbb{R}^{1\times c}$ be the input vector and $W^{(v)} \in \mathbb{R}^{c\times d}$ be the weight matrix of the FC layer v. The FC layer performs the matrix-vector multiplication $X^{(v)} W^{(v)}$. The number of MAC operations $N_{\mathrm{MAC}}$ is $\mathrm{sum}(\mathrm{supp}(X)\,\mathrm{supp}(W))$, where $\mathrm{supp}(T)$ returns a binary tensor indicating the nonzero positions of tensor T. So the computation energy for a fully connected layer v is:
$$E^{(v)}_{\mathrm{comp}} = e_{\mathrm{MAC}}\,\mathrm{sum}\big(\mathrm{supp}(X^{(v)})\,\mathrm{supp}(W^{(v)})\big) \le e_{\mathrm{MAC}}\,\|W^{(v)}\|_0, \qquad (1)$$
where the equality is reached when the input is dense.
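As a minimal sketch of Eq. (1), the snippet below counts the surviving MACs from the supports of the input and the weights; the per-MAC energy value is a made-up placeholder, not a number from the paper.

import numpy as np

E_MAC = 1.0  # energy of a single MAC, in arbitrary units (hypothetical value)

def fc_compute_energy(x, W, e_mac=E_MAC):
    """Computation energy of a fully connected layer following Eq. (1):
    a MAC is charged only when both the input element and the weight are nonzero."""
    supp_x = (x != 0).astype(np.int64)           # shape (c,)
    supp_W = (W != 0).astype(np.int64)           # shape (c, d)
    n_mac = np.sum(supp_x[:, None] * supp_W)     # sum(supp(X) supp(W))
    return e_mac * n_mac

x = np.array([0.5, 0.0, -1.2])
W = np.array([[1.0, 0.0], [2.0, 3.0], [0.0, -1.0]])
print(fc_compute_energy(x, W))   # 2 surviving MACs -> 2.0; dense-input bound is e_mac*||W||_0 = 4.0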
Convolution layer The CONV layer performs the convolution operation between a 4-D weight (also referred to as kernel or filter) tensor and a 3-D input tensor. Let $W^{(u)} \in \mathbb{R}^{d\times c\times r\times r}$ be the weight tensor, where d, c, and r are tensor dimension parameters. Let $X^{(u)} \in \mathbb{R}^{c\times h\times w}$ be the input tensor, where h and w are the input height and width. The convolution operation in the CONV layer u generates a 3-dimensional tensor:
$$\big(X^{(u)} * W^{(u)}\big)_{j,y,x} = \sum_{i=1}^{c}\;\sum_{r',r''=0}^{r-1} X^{(u)}_{i,\,y+r',\,x+r''}\; W^{(u)}_{j,i,r',r''}, \qquad (2)$$
where x, y indicate the position in the output tensor, which has height $h' = \lfloor (h + 2p - r)/s \rfloor + 1$ and width $w' = \lfloor (w + 2p - r)/s \rfloor + 1$ (p is the convolution padding and s is the convolution stride). The tensor convolution (2) can be seen as a special matrix-matrix multiplication (Chellapilla et al., 2006; Chetlur et al., 2014). Specifically, we unfold the tensor $X^{(u)}$ to a matrix $\bar{X}^{(u)} \in \mathbb{R}^{h'w' \times cr^2}$, and unfold the tensor $W^{(u)}$ to a matrix $\bar{W}^{(u)} \in \mathbb{R}^{cr^2 \times d}$. $\bar{X}^{(u)}$ and $\bar{W}^{(u)}$ are then multiplied together in the systolic array to compute the equivalent convolution result between $X^{(u)}$ and $W^{(u)}$.
Nonzero elements in $X^{(u)}$ and $W^{(u)}$ incur actual MAC operations. Thus, $N_{\mathrm{MAC}} = \mathrm{sum}(\mathrm{supp}(X^{(u)}) * \mathrm{supp}(W^{(u)})) \le h'w'\,\|W^{(u)}\|_0$ (the equality holds when the input is dense), resulting in the following computation energy of a CONV layer u:
$$E^{(u)}_{\mathrm{comp}} = e_{\mathrm{MAC}}\,\mathrm{sum}\big(\mathrm{supp}(X^{(u)}) * \mathrm{supp}(W^{(u)})\big) \le e_{\mathrm{MAC}}\, h'w'\,\|W^{(u)}\|_0. \qquad (3)$$
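A small sketch of the MAC count behind Eq. (3), using the im2col unfolding described above; the stride/padding handling is a straightforward reference implementation rather than the paper's code.

import numpy as np

def conv_mac_count(X, W, stride=1, pad=0):
    """Count the MACs of a CONV layer (Eq. (3)) by unfolding the input into the
    im2col matrix X_bar of shape (h'w', c*r*r) and the kernel into W_bar of shape
    (c*r*r, d); a MAC is charged only when both unfolded operands are nonzero."""
    d, c, r, _ = W.shape
    c2, h, w = X.shape
    assert c == c2
    Xp = np.pad(X, ((0, 0), (pad, pad), (pad, pad)))
    h_out = (h + 2 * pad - r) // stride + 1
    w_out = (w + 2 * pad - r) // stride + 1
    cols = []
    for y in range(h_out):
        for x in range(w_out):
            patch = Xp[:, y * stride:y * stride + r, x * stride:x * stride + r]
            cols.append(patch.reshape(-1))
    X_bar = np.stack(cols)                      # (h'w', c*r*r)
    W_bar = W.reshape(d, -1).T                  # (c*r*r, d)
    n_mac = np.sum((X_bar != 0).astype(np.int64) @ (W_bar != 0).astype(np.int64))
    upper_bound = h_out * w_out * np.count_nonzero(W)   # h'w' * ||W||_0
    return n_mac, upper_bound

X = np.zeros((1, 4, 4)); X[0, 1, 1] = 1.0
W = np.ones((2, 1, 3, 3))
print(conv_mac_count(X, W))   # (8, 72): 8 actual MACs vs. the dense-input bound of 72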
3.3 ENERGY CONSUMPTION FOR DATA ACCESS
Accessing data happens in every layer. The challenge in modeling the data access energy is that modern hardware is equipped with a multi-level memory hierarchy in order to improve speed and save energy (Hennessy & Patterson, 2011). Specifically, the data is originally stored in a large memory, which is slow and energy-hungry. When the data is needed for a computation, the hardware loads it from the large memory into a smaller memory that is faster and consumes less energy. If the data is reused often, it mostly lives in the small memory. Thus, such a multi-level memory hierarchy saves overall energy and improves overall speed by exploiting data reuse.
Without losing generality, we model a common, three-level memory hierarchy composed of a Dynamic Random Access Memory (DRAM), a Cache, and a Register File (RF). The cache is split into two halves: one for holding X (i.e., the feature map in a CONV layer and the feature vector in a FC layer) and the other for holding W (i.e., the convolution kernel in a CONV layer and the weight matrix in an FC layer). This is by far the most common memory hierarchy in DNN hardware such as Google’s TPU (Jouppi et al., 2017; Chen et al., 2016; Zhu et al., 2018; Han et al., 2016). Data is always loaded from DRAM into cache, and then from cache to RFs.
In many of today’s DNN hardware designs, the activations and weights are stored in a compressed format, and thus only non-zero values are accessed, as in prior work (Chen et al., 2016; Parashar et al., 2017). Therefore, if the value of the data being loaded is zero, the hardware can skip the data access and thereby save energy. There is a negligible amount of overhead to “unpack” and “pack” compressed data, which we simply take away from the energy budget as a constant factor. This is also the same modeling assumption used by Energy-Aware Pruning (Yang et al., 2017).
To compute E_data, we must calculate the number of data accesses at each memory level, i.e., N_DRAM, N_cache, N_RF. Let the unit energy costs of the different memory levels be e_DRAM, e_cache, and e_RF, respectively; the total data access energy consumption E_data is then e_DRAM N_DRAM + e_cache N_cache + e_RF N_RF. We count the number of data accesses for both the weights and the input, then combine them together. The detailed derivation of the data access energy is included in the Appendix.
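A minimal sketch of this energy breakdown follows; the per-access energy constants are placeholders chosen only to reflect the usual ordering (DRAM ≫ cache ≫ register file), not measured values.

# Hypothetical per-access energy costs (arbitrary units); real numbers would come
# from the target hardware.
E_DRAM, E_CACHE, E_RF = 200.0, 6.0, 1.0

def data_access_energy(n_dram, n_cache, n_rf,
                       e_dram=E_DRAM, e_cache=E_CACHE, e_rf=E_RF):
    """E_data = e_DRAM*N_DRAM + e_cache*N_cache + e_RF*N_RF for one layer."""
    return e_dram * n_dram + e_cache * n_cache + e_rf * n_rf

print(data_access_energy(n_dram=1000, n_cache=5000, n_rf=20000))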
3.4 THE OVERALL ENERGY ESTIMATION FORMULATION
Let U and V be the sets of convolutional layers and fully connected layers in a DNN respectively. The superscript (u) and (v) indicate the energy consumption of layer u ∈ U and v ∈ V , respectively. Then the overall energy consumption of a DNN inference can be modeled by
$$E(X, W) := \sum_{u\in U}\big(E^{(u)}_{\mathrm{comp}} + E^{(u)}_{\mathrm{data}}\big) + \sum_{v\in V}\big(E^{(v)}_{\mathrm{comp}} + E^{(v)}_{\mathrm{data}}\big), \qquad (4)$$
where X stacks input vectors/tensors at all layers and W stacks weight matrices/tensors at all layers.
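The overall estimate in Eq. (4) is just a sum over layers; the short sketch below shows that accumulation, assuming the per-layer terms have already been produced by estimators like the ones sketched above (the dictionary layout is an illustrative choice, not the paper's API).

def total_inference_energy(layers):
    """Eq. (4): sum the computation and data-access energy over all CONV and FC layers.
    `layers` is a list of dicts with keys 'e_comp' and 'e_data', one per layer."""
    return sum(layer['e_comp'] + layer['e_data'] for layer in layers)

# Example: a toy two-layer network.
print(total_inference_energy([{'e_comp': 120.0, 'e_data': 950.0},
                              {'e_comp': 40.0,  'e_data': 310.0}]))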
4 ENERGY-CONSTRAINED DNN MODEL
Given the energy model presented in Section 3, we propose a new energy-constrained DNN model that bounds the energy consumption of a DNN’s inference. Different from prior work on model pruning in which energy reduction is a byproduct of model sparsity, our goal is to directly bound the energy consumption of a DNN while sparsity is just used as a means to reduce energy.
This section formulates training an energy-constrained DNN as an optimization problem. We first formulate the optimization constraint by introducing a trainable mask variable into the energy modeling to enforce layer input sparsity. We then define a new loss function by introducing the knowledge distillation regularizer that helps improve training convergence and reduce overfitting.
Controlling Input Sparsity Using Input Mask The objective of training an energy-constrained DNN is to minimize the accuracy loss while ensuring that the DNN inference energy is below a given budget, Ebudget. Since the total energy consumption is a function of ‖X(u)‖0 and ‖W (u)‖0, it is natural to think that the trainable parameters are X and W . In reality, however, X depends on the input to the DNN (e.g., an input image to an object recognition DNN), and thus is unknown during training time. Therefore, in conventional DNN training frameworks X is never trainable.
To include the sparsity of X in our training framework, we introduce a trainable binary mask M that has the same shape as X and is multiplied with X before X is fed into CONV or FC layers, or equivalently, at the end of the previous layer. For example, if the input to a standard CONV layer is $X^{(u)}$, the input now becomes $X^{(u)} \odot M^{(u)}$, where $\odot$ denotes the element-wise multiplication. In practice, we do not actually perform this multiplication but only read $X^{(u)}$ at the nonzero positions of $M^{(u)}$.
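The following minimal sketch shows the mathematically equivalent masked-input form (an explicit element-wise product on a toy feature map); an actual deployment would instead skip reads at the zero positions of M.

import numpy as np

def masked_input(x, mask):
    """Apply the trainable binary mask M to a layer input X (element-wise product X ⊙ M)."""
    assert x.shape == mask.shape
    return x * mask

x = np.random.randn(3, 8, 8)          # e.g. the input feature map of a CONV layer
mask = (np.random.rand(3, 8, 8) > 0.5).astype(x.dtype)
x_sparse = masked_input(x, mask)
assert np.count_nonzero(x_sparse) <= np.count_nonzero(mask)   # ||X ⊙ M||_0 <= ||M||_0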
Algorithm 1: Energy-Constrained DNN Training.
Input: Energy budget Ebudget, learning rates η1, η2, mask sparsity decay step ∆q.
Result: DNN weights W*, input mask M*.
1:  Initialize W = Wdense, M = 1, q = ‖M‖0 − ∆q;
2:  while True do
        // Update DNN weights
3:      while W has not converged do
4:          W = W − η1 ∇̂W L̄(M, W);          // SGD step
5:          W = PΩ(Ebudget)(W);              // Energy-constraint projection for weights W
6:      end
7:      If previous_accuracy > current_accuracy, exit loop with previous W and M;
        // Update input mask
8:      while M has not converged do
9:          M = M − η2 ∇̂M L̄(M, W);          // SGD step
10:         Clamp values of M into [0, 1]: assign 1 (or 0) to values that exceed 1 (or are negative);
11:         M = P‖M‖0≤q(M);                  // L0-constraint projection for input mask M
12:     end
13:     Round values of M into {0, 1};
14:     Decay the sparsity constraint q = q − ∆q;
15: end
16: W* = W, M* = M.
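For concreteness, here is a minimal Python sketch of Algorithm 1's control flow. The gradient and projection operators are passed in as placeholder callables (hypothetical names, not functions defined in the paper), and the accuracy-based early exit of line 7 is omitted for brevity.

import numpy as np

def sgd_projected_training(W, M, grad_W, grad_M, project_energy, project_l0,
                           eta1=1e-3, eta2=1e-4, inner_steps=100,
                           delta_q=0.1, outer_iters=5):
    """Alternate between (i) SGD on W followed by projection onto the energy constraint
    and (ii) SGD on the mask M followed by clamping, an L0 projection, and rounding,
    while decaying the mask sparsity budget q."""
    q = M.size - int(delta_q * M.size)
    for _ in range(outer_iters):
        for _ in range(inner_steps):                 # lines 3-6: update W
            W = W - eta1 * grad_W(W, M)
            W = project_energy(W)
        for _ in range(inner_steps):                 # lines 8-12: update M
            M = M - eta2 * grad_M(W, M)
            M = np.clip(M, 0.0, 1.0)
            M = project_l0(M, q)
        M = (M >= 0.5).astype(M.dtype)               # line 13: round M back to {0, 1}
        q = max(q - int(delta_q * M.size), 0)        # line 14: decay the sparsity budget
    return W, M

# Toy usage with stub gradients/projections (for illustration only).
keep_topq = lambda M, q: M * (np.argsort(np.argsort(-np.abs(M.ravel()))).reshape(M.shape) < q)
W0, M0 = np.random.randn(4, 4), np.ones((4, 4))
W_out, M_out = sgd_projected_training(W0, M0, lambda W, M: W, lambda W, M: M,
                                      lambda W: W, keep_topq, inner_steps=2, outer_iters=2)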
With the trainable mask M, we can ensure that $\|X^{(u)} \odot M^{(u)}\|_0 \le \|M^{(u)}\|_0$, and thereby bound the sparsity of the input at training time. In this way, the optimization constraint during training becomes $E(M, W) \le E_{\mathrm{budget}}$, where $E(M, W)$ denotes the total DNN inference energy consumption, which is a function of X and W (as shown in Equation (4)), and thus a function of M and W.
Knowledge Distillation as a Regularizer Directly optimizing over the constraint would likely lead to a local optimum because the energy model is highly non-convex. Recent works (Mishra & Marr, 2017; Tschannen et al., 2017; Zhuang et al., 2018) notice that knowledge distillation is helpful in training compact DNN models. To improve the training performance, we apply the knowledge distillation loss (Ba & Caruana, 2014) as a regularization to the conventional loss function. Intuitively, the regularization uses a pre-trained dense model to guide the training of a sparse model. Specifically, our regularized loss function is:
$$\bar{L}_{\lambda, W_{\mathrm{dense}}}(M, W) := (1-\lambda)\, L(M, W) + \lambda\, \mathbb{E}_X\big[\|\phi(X; W) - \phi(X; W_{\mathrm{dense}})\|^2 / |\phi(\cdot; W)|\big], \qquad (5)$$
where $W_{\mathrm{dense}}$ is the original dense model, and $L(M, W)$ is the original loss, e.g., the cross-entropy loss for a classification task. $\phi(X; W)$ is the network's output (we use the output before the last activation layer, as in Ba & Caruana (2014)), $|\phi(\cdot; W)|$ is the network output dimensionality, and $0 \le \lambda \le 1$ is a hyperparameter similar to other standard regularizations.
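The sketch below evaluates the regularized loss of Eq. (5) for a single input, assuming the pre-activation outputs of the sparse and dense networks are already available; averaging over a batch would approximate the expectation over X.

import numpy as np

def distillation_regularized_loss(task_loss, logits_sparse, logits_dense, lam=0.5):
    """Eq. (5): blend the task loss with a distillation term that pulls the sparse
    model's pre-activation outputs toward those of the pre-trained dense model
    (squared error normalized by the output dimensionality)."""
    kd = np.mean((logits_sparse - logits_dense) ** 2)
    return (1.0 - lam) * task_loss + lam * kd

print(distillation_regularized_loss(0.8, np.array([1.0, -0.5]), np.array([0.9, -0.4])))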
Thus, training an energy-constrained DNN model is formulated as an optimization problem:
$$\min_{M, W}\; \bar{L}_{\lambda, W_{\mathrm{dense}}}(M, W) \quad \text{s.t.}\quad E(M, W) \le E_{\mathrm{budget}}. \qquad (6)$$
5 OPTIMIZATION
This section introduces an algorithm to solve the optimization problem formulated in (6). The overall algorithm is shown in Algorithm 1. Specifically, the algorithm includes three key parts:
• Initialization by training a dense model. That is,
$$W_{\mathrm{dense}} := \arg\min_{W}\; L(M, W) \qquad (7)$$
• Fix M and optimize W by approximately solving (using the Wdense initialization):
$$\min_{W}\; \bar{L}(M, W) \quad \text{s.t.}\quad E(M, W) \le E_{\mathrm{budget}} \qquad (8)$$
• Fix W and optimize M by approximately solving:
$$\min_{M}\; \bar{L}(M, W) \quad \text{s.t.}\quad \|M\|_0 \le q,\; M \in [0, 1] \qquad (9)$$
After the initialization step (Line 1 in Algorithm 1), the training algorithm iteratively alternates between the second step (Lines 3-6 in Algorithm 1) and the third step (Lines 8-13 in Algorithm 1) while gradually reducing the sparsity constraint q (Line 14 in Algorithm 1) until the training accuracy converges. Note that Equation (7) is the classic DNN training process, and solving Equation (9) involves only the well-known L0-norm projection $P_{\|M\|_0 \le q}(Q) := \arg\min_{\|M\|_0 \le q} \|M - Q\|^2$. We thus focus on how Equation (8) is solved.
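The L0-norm projection has a simple closed form (keep the q largest-magnitude entries); a minimal numpy sketch follows.

import numpy as np

def l0_projection(Q, q):
    """P_{||M||_0 <= q}(Q): keep the q entries of Q with the largest magnitude and
    zero out the rest, which is the closest point to Q with at most q nonzeros."""
    flat = Q.ravel()
    if q >= flat.size:
        return Q.copy()
    keep = np.argpartition(np.abs(flat), -q)[-q:] if q > 0 else np.array([], dtype=int)
    out = np.zeros_like(flat)
    out[keep] = flat[keep]
    return out.reshape(Q.shape)

M = np.array([[0.9, -0.1], [0.05, 0.7]])
print(l0_projection(M, 2))   # keeps 0.9 and 0.7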
Optimizing Weight Matrix W To solve (8), one can use either projected gradient descent or projected stochastic gradient descent. The key difficulty lies in the projection step
$$P_{\Omega(E_{\mathrm{budget}})}(Z) := \arg\min_{W \in \Omega(E_{\mathrm{budget}})} \|W - Z\|^2, \qquad (10)$$
where Z could be $W - \eta \nabla_W \bar{L}(W, M)$, or the same expression with $\nabla_W \bar{L}(W, M)$ replaced by a stochastic gradient $\hat{\nabla}_W \bar{L}(W, M)$. To solve the projection step, let us take a closer look at the constraint in Equation (4). We rearrange the energy constraint $\Omega(E_{\mathrm{budget}})$ into the following form with respect to W:
$$\Big\{ W \;\Big|\; \sum_{u \in U \cup V} \alpha^{(u)}_1 \min(k, \|W^{(u)}\|_0) + \alpha^{(u)}_2 \max(0, \|W^{(u)}\|_0 - k) + \alpha^{(u)}_3 \|W^{(u)}\|_0 + \alpha^{(u)}_4 \le E_{\mathrm{budget}} \Big\}, \qquad (11)$$
where W stacks all the variables $\{W^{(u)}\}_{u \in U \cup V}$, and $\alpha^{(u)}_1, \alpha^{(u)}_2, \alpha^{(u)}_3, \alpha^{(u)}_4$ and k are properly defined nonnegative constants. Note that $\alpha^{(u)}_1 \le \alpha^{(u)}_2$ and k is a positive integer. Theorem 1 casts the energy-constrained projection problem to a 0/1 knapsack problem. The proof is included in the Appendix.
Theorem 1. The projection problem in (10) is equivalent to the following 0/1 knapsack problem:
$$\max_{\xi \text{ is binary}}\; \langle Z \odot Z, \xi \rangle, \quad \text{s.t.}\quad \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4, \qquad (12)$$
where Z stacks all the variables $\{Z^{(u)}\}_{u \in U \cup V}$, A and $\xi$ are of the same shape as Z, and the j-th element of $A^{(u)}$ for any $u \in U \cup V$ is defined by
$$A^{(u)}_j = \begin{cases} \alpha^{(u)}_1 + \alpha^{(u)}_3, & \text{if } Z^{(u)}_j \text{ is among the top } k \text{ elements of } Z^{(u)} \text{ in terms of magnitude;} \\ \alpha^{(u)}_2 + \alpha^{(u)}_3, & \text{otherwise.} \end{cases} \qquad (13)$$
The optimal solution of (10) is $Z \odot \xi^*$, where $\xi^*$ is the optimal solution to the knapsack problem (12).
The knapsack problem is NP-hard, but approximate solutions can be found efficiently. There exists an approximation algorithm (Chan, 2018) that finds an $\epsilon$-accurate solution in $O(n \log(1/\epsilon) + \epsilon^{-2.4})$ computational complexity. However, due to some special structure in our problem, an $\epsilon$-accurate solution can be found much faster. In the Appendix, we show that a $(1+\epsilon)$-approximate solution of problem (10) can be obtained in $\tilde{O}(n + \frac{1}{\epsilon^2})$ time complexity ($\tilde{O}$ omits logarithmic factors), though the implementation of that algorithm is involved. Here we propose an efficient approximate algorithm based on the "profit density." The profit density of item j is defined as $Z_j^2 / A_j$. We sort all items by profit density and iteratively select the largest items until the constraint boundary is reached. The detailed algorithm description is shown in the Appendix (Algorithm 2). This greedy approximation algorithm also admits a nice property, as shown in Theorem 2.
Theorem 2. For the projection problem (10), the approximate solution $W'' \in \Omega(E_{\mathrm{budget}})$ of the greedy approximation algorithm admits
$$\|W'' - Z\|^2 \le \|P_{\Omega(E_{\mathrm{budget}})}(Z) - Z\|^2 + \mathrm{Top}_{\|W''\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot \min\big(\max(A) - \gcd(A),\; R(W'')\big), \qquad (14)$$
where $\max(A)$ is the maximal element of A, which is a nonnegative matrix defined in (13); $\mathrm{Top}_k(\cdot)$ returns the k-th largest element of its argument; $\oslash$ denotes element-wise division; and $\gcd(\cdot)$ is the largest positive rational number that divides every argument, e.g., $\gcd(0, 1/3, 2/3) = 1/3$. In (14), $\gcd(A)$ denotes the greatest common divisor of all elements in A (see footnote 2), and $R(W'')$ denotes the remaining budget
$$R(W'') = E_{\mathrm{budget}} - \sum_{u \in U \cup V} \alpha^{(u)}_4 - \langle A, \mathrm{supp}(W'') \rangle.$$
2Here we assume A only contains rational numbers since gcd is used.
The formal proof is in the Appendix. $W''$ is the optimal projection solution to (10) if either of the following conditions holds:
1. (The remaining budget is 0.) It means that the greedy Algorithm 2 runs out of budget;
2. (The matrix A satisfies max(A) = gcd(A).) It implies that all elements in A have the same value; in other words, the weights of all items are identical.
6 EVALUATION
The evaluations are performed on ImageNet (Deng et al., 2009), MNIST, and MS-Celeb-1M (Guo et al., 2016) datasets. For the MS-Celeb-1M, we follow the baseline setting reported in the original paper (Guo et al., 2016), which selects 500 people who have the most face images. We randomly sample 20% images as the validation set. We use both classic DNNs, including AlexNet (Krizhevsky et al., 2012) and LeNet-5 (LeCun et al., 1998), as well as recently proposed SqueezeNet (Iandola et al., 2016) and MobileNetV2 (Sandler et al., 2018).
We compare our method mainly with five state-of-the-art pruning methods: magnitude-based pruning (MP) (Han et al., 2015b;a), structured sparsity learning (SSL) (Wen et al., 2016), structured bayesian pruning (SBP) (Neklyudov et al., 2017), bayesian compression (BC) (Louizos et al., 2017) and energy-aware pruning (EAP) (Yang et al., 2017). Filter pruning methods (Li et al., 2016; He et al., 2017) require a sparsity ratio to be set for each layer, and these sparsity hyper-parameters determine the energy cost of the DNN. Since manually setting all these hyper-parameters for energy-constrained compression is not trivial, we directly compare against NetAdapt (Yang et al., 2018), which automatically searches such sparsity ratios and uses filter pruning to compress DNN models. We implement an energy-constrained version of NetAdapt, which is originally designed to restrict the inference latency. Note that MobileNetV2 and SqueezeNet have special structures (e.g., residual blocks) that are not fully supported by NetAdapt. Thus, we show the results of NetAdapt only for AlexNet and LeNet-5.
Hyper-parameters In the experiments, we observe that knowledge distillation can improve the performance of MP and SSL, so we apply knowledge distillation to all methods, including the baselines, for a fair comparison. The results of removing knowledge distillation from MP and SSL are included in the Appendix. We choose the distillation weight λ = 0.5. EAP proposes an alternative way to address the overfitting issue, so we directly use their results. For all the DNNs, we turn off the dropout layers since we find the knowledge distillation regularization performs better. In all the experiments, we choose ∆q = 0.1|M|, where |M| is the number of mask elements. For optimizing W, we use a pre-trained dense initialization and update W by SGD with learning rate η1 = 0.001 and weight decay 10^-4. For optimizing the input mask parameters M, we use the Adam optimizer (Kingma & Ba, 2014) with η2 = 0.0001 and weight decay 10^-5 (MNIST) / 10^-6 (MS-Celeb-1M). To stabilize the training process, we exponentially decay the energy budget to the target budget, and also use this trick in MP training (i.e., decaying the sparsity budget) for fair comparison.
6.1 IMAGENET
We set the energy budget to be less than the minimal energy consumption among the three baseline methods. We use the same performance metric (i.e., top-5 test accuracy) and hardware parameters, i.e., e_MAC, e_DRAM, e_cache, e_RF, s_h, s_w, k_W, k_X, as described in the EAP paper (Yang et al., 2017). We initialize all the DNNs with a pre-trained dense model, which is also used to set up the knowledge distillation regularization. The top-5 test accuracies of the dense models are 79.1% (AlexNet), 80.5% (SqueezeNet), and 90.5% (MobileNetV2). We use batch size 128 and train all the methods for 30 epochs. For SSL and NetAdapt, we apply 20 additional epochs to achieve comparable results. We implement the projection operation P_Ω(E_budget) on GPU, and it takes less than 0.2 s per projection in our experiments. The detailed wall-clock result is included in the Appendix.
Table 1 shows the top-5 test accuracy drop and energy consumption of various methods compared to the dense model. Our training framework consistently achieves a higher accuracy with a lower
energy consumption under the same energy budget. For instance, on AlexNet, under a smaller energy budget (26% < 27%), our method achieves a lower accuracy drop than EAP (0.5% vs. 0.8%). The advantage is also evident on SqueezeNet and MobileNetV2, which are already light-weight by design. EAP does not report data on MobileNetV2. We observe that weight sparsity is not a good proxy for energy consumption: our method achieves lower energy consumption despite having higher density.
Figure 2 comprehensively compares our method with prior work. Solid markers represent DNNs trained from our framework under different energy budgets (x-axis). Empty markers represent DNNs produced from previous techniques. DNNs trained by our method have lower energies with higher accuracies (i.e., solid markers are closer to the bottom-left corner than empty markers). For instance on SqueezeNet, our most energy-consuming DNN still reduces energy by 23% while improves accuracy by 0.2% compared to EAP.
6.2 MNIST AND MS-CELEB-1M
MNIST and MS-Celeb-1M (Guo et al., 2016) represent datasets whose inputs have regular patterns that are amenable to input masking. For instance, MS-Celeb-1M is a face image dataset and we use its aligned face images, where most facial features are located in the center of an image. In such scenarios, training input masks lets us control the sparsity of the layer inputs and thus further reduce energy beyond merely pruning model parameters as in conventional methods. We do not claim that applying an input mask is a general technique; rather, we demonstrate its effectiveness when applicable. We compare our method with MP and SSL using
LeNet-5 and MobileNetV2 for these two datasets, respectively. The pre-trained dense LeNet-5 has 99.3% top-1 test accuracy on MNIST, and the dense MobileNetV2 has 65.6% top-5 test accuracy on MS-Celeb-1M. EAP does not report data on these two datasets. Similar to the evaluation on ImageNet, we set the energy budget to be lower than the energy consumptions of MP and SSL. We use batch size 32 on MNIST and 128 on MS-Celeb-1M, and number of epochs is set the same as the ImageNet experiments. Table 2 compares the energy consumption and accuracy drop. Our method consistently achieves higher accuracy with lower energy under the same or even smaller energy budget. We visualize the sparsity of the learned input masks in Figure 3.
7 CONCLUSION
This paper demonstrates that it is possible to train DNNs with quantitative energy guarantees in an end-to-end fashion. The enabler is an energy model that relates the DNN inference energy to the DNN parameters. Leveraging the energy model, we augment the conventional DNN training with an energy-constrained optimization process, which minimizes the accuracy loss under the constraint of a given energy budget. Using an efficient algorithm, our training framework generates DNNs with higher accuracies under the same or lower energy budgets compared to prior art.
APPENDICES
DETAIL OF ENERGY CONSUMPTION FOR DATA ACCESS
FULLY CONNECTED LAYER
To multiply X(v) ∈ Rc and W (v) ∈ Rc×d, each nonzero element of W (v) is used once but loaded three times, once each from DRAM, cache and RF, respectively. Thus, the number of DRAM, cache, and RF accesses for weight matrix W (v) is:
$$N^{\mathrm{weights}}_{\mathrm{DRAM}} = N^{\mathrm{weights}}_{\mathrm{cache}} = N^{\mathrm{weights}}_{\mathrm{RF}} = \|W^{(v)}\|_0. \qquad (15)$$
Input $X^{(v)}$ is fed into the systolic array $\lceil d/s_w \rceil$ times, where $s_w$ denotes the systolic array width. Thus, the number of cache accesses for $X^{(v)}$ is:
$$N^{\mathrm{input}}_{\mathrm{cache}} = \lceil d/s_w \rceil\, \|X^{(v)}\|_0. \qquad (16)$$
Let $k_X$ be the cache size for the input $X^{(v)}$. If $k_X$ is less than $\|X^{(v)}\|_0$, there are $\|X^{(v)}\|_0 - k_X$ elements that must be reloaded from DRAM every time. The remaining $k_X$ elements need to be loaded from DRAM only once, as they always reside in the low-level memories. Thus, there are $\lceil d/s_w\rceil(\|X^{(v)}\|_0 - k_X) + k_X$ DRAM accesses for $X^{(v)}$. In addition, the output vector of the FC layer (the result of $X^{(v)} W^{(v)}$) needs to be written back to DRAM, which further incurs d DRAM accesses. Thus, the total number of DRAM accesses related to $X^{(v)}$ is:
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \lceil d/s_w\rceil \max(0, \|X^{(v)}\|_0 - k_X) + \min(k_X, \|X^{(v)}\|_0) + d. \qquad (17)$$
Each input element is loaded from RF once for each MAC operation, and there are two RF accesses incurred by accumulation for each MAC operation (one read and one write). Thus, the total number of RF accesses related to X(v) is:
$$N^{\mathrm{input}}_{\mathrm{RF}} = d\,\|X^{(v)}\|_0 + 2\,\|W^{(v)}\|_0. \qquad (18)$$
In summary, the data access energy of a fully connected layer v is expressed as follows, in which each component follows the derivations in Equation (15) through Equation (18):
$$E^{(v)}_{\mathrm{data}} = e_{\mathrm{DRAM}}(N^{\mathrm{input}}_{\mathrm{DRAM}} + N^{\mathrm{weights}}_{\mathrm{DRAM}}) + e_{\mathrm{cache}}(N^{\mathrm{input}}_{\mathrm{cache}} + N^{\mathrm{weights}}_{\mathrm{cache}}) + e_{\mathrm{RF}}(N^{\mathrm{input}}_{\mathrm{RF}} + N^{\mathrm{weights}}_{\mathrm{RF}}). \qquad (19)$$
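The following sketch transcribes Eqs. (15)-(18) into a small access-counting helper for an FC layer; it is a direct reading of the formulas above, with the hardware parameters (s_w, k_X) passed in as arguments.

import numpy as np

def fc_data_access_counts(x, W, s_w, k_x):
    """Access counts for an FC layer following Eqs. (15)-(18). x has shape (c,),
    W has shape (c, d); s_w is the systolic-array width and k_x the cache size
    reserved for the input."""
    nnz_x, nnz_w = np.count_nonzero(x), np.count_nonzero(W)
    d = W.shape[1]
    folds = -(-d // s_w)                                           # ceil(d / s_w)
    n_dram_w = n_cache_w = n_rf_w = nnz_w                          # Eq. (15)
    n_cache_x = folds * nnz_x                                      # Eq. (16)
    n_dram_x = folds * max(0, nnz_x - k_x) + min(k_x, nnz_x) + d   # Eq. (17)
    n_rf_x = d * nnz_x + 2 * nnz_w                                 # Eq. (18)
    return {"dram": n_dram_w + n_dram_x,
            "cache": n_cache_w + n_cache_x,
            "rf": n_rf_w + n_rf_x}

x = np.array([0.5, 0.0, -1.2, 0.0])
W = np.random.randn(4, 6)
print(fc_data_access_counts(x, W, s_w=2, k_x=2))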
CONVOLUTION LAYER
Similar to a FC layer, the data access energy of a CONV layer u is modeled as:
$$E^{(u)}_{\mathrm{data}} = e_{\mathrm{DRAM}}(N^{\mathrm{input}}_{\mathrm{DRAM}} + N^{\mathrm{weights}}_{\mathrm{DRAM}}) + e_{\mathrm{cache}}(N^{\mathrm{input}}_{\mathrm{cache}} + N^{\mathrm{weights}}_{\mathrm{cache}}) + e_{\mathrm{RF}}(N^{\mathrm{input}}_{\mathrm{RF}} + N^{\mathrm{weights}}_{\mathrm{RF}}). \qquad (20)$$
The notations are the same as in FC layer. We now show how the different components are modeled.
To convolve $W^{(u)} \in \mathbb{R}^{d\times c\times r\times r}$ with $X^{(u)} \in \mathbb{R}^{c\times h\times w}$, each nonzero element in the weight tensor $W^{(u)}$ is fed into the systolic array $\lceil h'w'/s_h \rceil$ times, where $s_h$ denotes the height of the systolic array and $h'$ and $w'$ are the output dimension parameters defined above. Thus,
$$N^{\mathrm{weights}}_{\mathrm{cache}} = \lceil h'w'/s_h \rceil\, \|W^{(u)}\|_0. \qquad (21)$$
Similar to the FC layer, the number of RF accesses for W (u) during all the MAC operations is:
$$N^{\mathrm{weights}}_{\mathrm{RF}} = h'w'\,\|W^{(u)}\|_0. \qquad (22)$$
Let $k_W$ be the cache size for the weight matrix $W^{(u)}$. If $\|W^{(u)}\|_0 > k_W$, there are $k_W$ nonzero elements of $W^{(u)}$ that are accessed from DRAM only once, as they reside in the cache, while the remaining $\|W^{(u)}\|_0 - k_W$ elements are accessed from DRAM $\lceil h'w'/s_h \rceil$ times. Thus,
$$N^{\mathrm{weights}}_{\mathrm{DRAM}} = \lceil h'w'/s_h \rceil \max(0, \|W^{(u)}\|_0 - k_W) + \min(k_W, \|W^{(u)}\|_0). \qquad (23)$$
Let $k_X$ be the cache size for the input $X^{(u)}$. If every nonzero element in $X^{(u)}$ were loaded from DRAM to cache only once, $N^{\mathrm{input}}_{\mathrm{DRAM}}$ would simply be $\|X^{(u)}\|_0$. In practice, however, the cache size $k_X$ is much smaller than $\|X^{(u)}\|_0$. Therefore, some portion of $X^{(u)}$ needs to be reloaded. To calculate the amount of reloaded DRAM accesses, we observe that in real hardware $X^{(u)}$ is loaded from DRAM to the cache at row granularity.
When the input $X^{(u)}$ is dense, there are at least cw elements loaded at once. In this way, the cache first loads $\lfloor k_X/(cw) \rfloor$ rows from DRAM, and after the convolutions related to these rows have finished, the cache loads the next $\lfloor k_X/(cw) \rfloor$ rows of $X^{(u)}$ for further processing. The rows loaded in two consecutive rounds overlap due to the nature of the convolution operation. The number of overlaps $R_{\mathrm{overlap}}$ is $\lceil h/(\lfloor k_X/(cw)\rfloor - r + s)\rceil - 1$, and each overlap has $cw(r - s)$ elements. Thus, $R_{\mathrm{overlap}} \times cw(r - s)$ elements need to be reloaded from DRAM. Finally, storing the outputs of the convolution incurs an additional $dh'w'$ DRAM writes. Summing the different parts together, the upper-bound ($X^{(u)}$ is dense) number of DRAM accesses for $X^{(u)}$ is:
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \|X^{(u)}\|_0 + \big(\lceil h/(\lfloor k_X/(cw)\rfloor - r + s)\rceil - 1\big)\, cw(r - s) + d h' w'. \qquad (24)$$
When the input X(u) is not dense, we can still count the exact number of elements in the overlaps Noverlap of the consecutive loading rounds, so we have:
$$N^{\mathrm{input}}_{\mathrm{DRAM}} = \|X^{(u)}\|_0 + N_{\mathrm{overlap}} + d h' w'. \qquad (25)$$
Every nonzero element in the unfolded input $\bar{X}^{(u)}$ is fed into the systolic array $\lceil d/s_w \rceil$ times (for grouped convolution, this number is divided by the number of groups). Each MAC operation introduces 2 RF accesses. Thus,
$$N^{\mathrm{input}}_{\mathrm{cache}} = \lceil d/s_w \rceil\, \|\bar{X}^{(u)}\|_0, \qquad N^{\mathrm{input}}_{\mathrm{RF}} = d\,\|\bar{X}^{(u)}\|_0 + 2 h' w' \|W^{(u)}\|_0. \qquad (26)$$
PROOF TO THEOREM 1
Proof. First, it is easy to see that (10) is equivalent to the following problem:
$$\max_{\xi \text{ is binary}}\; \langle Z \odot Z, \xi \rangle, \quad \text{s.t.}\quad \xi \in \Omega(E_{\mathrm{budget}}). \qquad (27)$$
Note that if the optimal solution to problem (27) is $\bar{\xi}$, the solution to problem (10) can be obtained as $Z \odot \bar{\xi}$; given the solution to (10), the solution to (27) can be obtained similarly. Therefore, we only need to prove that (27) is equivalent to (12). Meeting the following two conditions guarantees that (27) and (12) are equivalent since they have identical objective functions:
1. Any optimal solution of problem (27) is in the constraint set of problem (12);
2. Any optimal solution of problem (12) is in the constraint set of problem (27).
Let us prove the first condition. Let $\hat{\xi}$ be the optimal solution to (27). Then for any $u \in U \cup V$, the elements of $Z^{(u)}$ selected by $\hat{\xi}^{(u)}$ are the largest (in terms of magnitude) $\|\hat{\xi}^{(u)}\|_0$ elements of $Z^{(u)}$; otherwise there would exist at least one element that could be replaced by another element with a larger magnitude, which would increase the objective value in (27). Since $\hat{\xi} \in \Omega(E_{\mathrm{budget}})$, according to the definition of A in (13), $\hat{\xi}$ satisfies the constraint of (12).
Let us now prove the second condition. The definition of A in (13) shows that there can be at most two different $A^{(u)}$ values for each u, and the largest k elements in $Z^{(u)}$ always have the smaller value, i.e., $\alpha^{(u)}_1 + \alpha^{(u)}_3$. Let $\bar{\xi}$ be the optimal solution to the knapsack problem (12). For any $u \in U \cup V$, the elements selected by $\bar{\xi}^{(u)}$ are also the largest elements in $Z^{(u)}$ in terms of magnitude; otherwise there would exist an element $Z^{(u)}_j$ that has a larger magnitude but corresponds to a smaller $A^{(u)}_j$ ((13) shows that $A^{(u)}_i \ge A^{(u)}_j$ when $|Z^{(u)}_i| \le |Z^{(u)}_j|$). This would contradict the fact that $\bar{\xi}$ is optimal. In addition, $\bar{\xi}$ meets the constraint in problem (12). Therefore, $\bar{\xi} \in \Omega(E_{\mathrm{budget}})$. This completes the proof.
A (1 + ε)-APPROXIMATE SOLUTION FOR PROBLEM (10)
Theorem 3. For the projection problem (10), there exists an efficient approximation algorithm that has a computational complexity of $O\big((n + \frac{(|U|+|V|)^3}{\epsilon^2}) \log \frac{n \max(A)}{\min(A_+)}\big)$ and generates a solution $W' \in \Omega(E_{\mathrm{budget}})$ that admits
$$\|W' - Z\|^2 \le \Big\| P_{\Omega\big(\frac{E_{\mathrm{budget}}}{1 + O(\epsilon)}\big)}(Z) - Z \Big\|^2, \qquad (28)$$
where $\min(A_+)$ is the minimum of the positive elements in A.
|U| and |V| denote the number of CONV and FC layers, respectively. They are very small numbers that can be treated as constants here. Thus, the computational complexity for our problem reduces to $\tilde{O}(n + \frac{1}{\epsilon^2})$, where $\tilde{O}$ omits the logarithm term. In the following, we prove this theorem by construction.
PROBLEM FORMULATION
Definition 1. Inverted knapsack problem. Given n objects $I := \{(v_i, w_i)\}_{i=1}^{n}$, each with weight $w_i > 0$ and value $v_i \ge 0$, define $h_I(x)$ to be the smallest weight budget needed to attain total value x:
$$h_I(x) := \min_{\xi \in \{0,1\}^n} \sum_{i=1}^{n} w_i \xi_i \quad \text{s.t.}\quad \sum_{i=1}^{n} v_i \xi_i \ge x. \qquad (29)$$
We are more interested in the case that the weights of n objects are in m clusters, i.e. there are only m distinct weights,
$$|\{w_i\}_{i=1}^{n}| = m.$$
In our case, m is proportional to the number of layers in the DNN, and n is the number of all learnable weights in W, so $m \ll n$.
Definition 2. Inverse of a step function. The inverse of a step function f is defined as the maximal x having function value at most y:
$$f^{-1}(y) := \max_{f(x) \le y} x. \qquad (30)$$
Observation The inverse of the step function, $h^{-1}_I(y)$, is just the maximal value we can obtain given the weight budget y, i.e., the original knapsack problem:
$$h^{-1}_I(y) = \max_{\xi \in \{0,1\}^n} \sum_{i=1}^{n} v_i \xi_i, \quad \text{s.t.}\quad \sum_{i=1}^{n} w_i \xi_i \le y. \qquad (31)$$
Observation Given a step function with l breakpoints, its inverse can be generated with O(l) time complexity, and vice versa.
Thus, given the step function $h_I$ in (29), which has l breakpoints, we can get $h^{-1}_I$ (i.e., the original knapsack problem) within O(l) time complexity.
Definition 3. w-uniform. A step function f is w-uniform if the range of f is $\{-\infty, 0, w, 2w, \ldots, lw\}$.
Observation If all the objects in I have the same weight w, i.e., m = 1, then the function $h_I(x)$ is nondecreasing and w-uniform. Moreover, its breakpoints are
$$(0, 0),\; (v_1, w),\; (v_1 + v_2, 2w),\; \ldots,\; \Big(\sum_{i=1}^{n} v_i,\; nw\Big),$$
if the objects' indices follow the decreasing order of their values, i.e., $v_1 \ge v_2 \ge \ldots \ge v_n$. Thus we can get all possible function values of $h_I(x)$:
$$h_I(x) = kw, \quad \forall x \in \Big(\sum_{i=1}^{k-1} v_i,\; \sum_{i=1}^{k} v_i\Big].$$
Definition 4. (min, +)-convolution. For functions f, g, the (min, +)-convolution is:
$$(f \oplus g)(x) = \min_{x'} \big(f(x') + g(x - x')\big).$$
Observation If object sets I1 ∩ I2 = ∅, then
fI1∪I2 = fI1 ⊕ fI2 .
Observation The inverse of (min, +)-convolution between w-uniform function f and w-uniform function g is the (max, +)-convolution between f−1 and g−1:
$$(f \oplus g)^{-1}(y) = \max_{y' \in \{0, w, \ldots, lw\}} \big(f^{-1}(y') + g^{-1}(y - y')\big). \qquad (32)$$
Lemma 4. For any f and g nonnegative step functions, given an arbitrary number b, we always have
min{f ⊕ g, b} = min{min{f, b} ⊕min{g, b}, b} (33)
Proof. Given any x, let z ∈ Arg minx′ f(x′) + g(x − x′) and z̄ ∈ Arg minx′ min(f(x′), b) + min(g(x− x′), b), so we have (f ⊕ g)(x) = f(z) + g(x− z) and (min{f, b} ⊕min{g, b})(x) = min(f(z̄), b) + min(g(x− z̄), b). Consider the following cases:
1. (f ⊕ g)(x) ≥ b. In this case, we claim that (min{f, b} ⊕min{g, b})(x) ≥ b. We prove it by contradiction. Suppose (min{f, b} ⊕min{g, b})(x) < b which implies min(f(z̄), b) + min(g(x − z̄), b) < b. Because both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) < b, However, this contradicts (f ⊕ g)(x) ≥ b. Therefore, we have min((f ⊕ g)(x), b) = min((min{f, b} ⊕min{g, b})(x), b) = b.
2. (f ⊕ g)(x) < b. In this case, we have f(z) < b and g(x − z) < b, so min(f(z̄), b) + min(g(x− z̄), b) ≤ min(f(z), b)+min(g(x−z), b) = f(z)+g(x−z) = (f ⊕g)(x) < b. Since both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) ≥ (f ⊕ g)(x). Therefore, we have min(f(z̄), b) + min(g(x − z̄), b) = f(z) + g(x − z) ⇔ (min{f, b} ⊕ min{g, b})(x) = (f ⊕ g)(x).
EFFICIENCY OF (MIN, +)-CONVOLUTION
Lemma 5. Let f and g be nondecreasing w-uniform functions with O(l) breakpoints, the (min, +)-convolution f ⊕ g (having O(l) breakpoints) can be generated with O(l2) time complexity.
Proof. Firstly, we compute the inverse representation of f and g, i.e. compute f−1 and g−1 from Equation (30). The inverse representation can be computed in O(l) time (proportional to the number of breakpoints). From Equation (32), we can compute the inverse of f ⊕ g. For each y ∈ {0, 1w, ..., 2lw}, function (f ⊕ g)−1(y) can be computed in O(l) time by brute force. Thus a total O(l2) is enough to get (f ⊕ g)−1 which has O(l) breakpoints. We can get f ⊕ g via (f ⊕ g)−1 by the inverse definition (30) in O(l) time.
Lemma 6. Let f and g be nondecreasing step functions with l breakpoints in total. Then $\min\{f \oplus g, b\}$ can be approximated by a step function $\phi_b$ with $O(l + \frac{1}{\epsilon^2})$ complexity and $2\epsilon b$ additive error, i.e., $\min\{f \oplus g, b\} \le \phi_b \le \min\{f \oplus g, b\} + 2\epsilon b$. The resultant function $\phi_b$ has $O(1/\epsilon)$ breakpoints.
Proof. We can construct $(\epsilon b)$-uniform functions $f'_b, g'_b$ which have $\lceil 1/\epsilon \rceil$ breakpoints:
$$f'_b(x) = \Big\lceil \frac{\min(b, f(x))}{\epsilon b} \Big\rceil \epsilon b, \qquad g'_b(x) = \Big\lceil \frac{\min(b, g(x))}{\epsilon b} \Big\rceil \epsilon b.$$
This needs O(l) computational complexity. From Lemma 5, we can compute $f'_b \oplus g'_b$ with $O(\frac{1}{\epsilon^2})$ time complexity, and $\phi_b = \min\{f'_b \oplus g'_b, b\}$ has $O(1/\epsilon)$ breakpoints. Because $f'_b$ and $g'_b$ are constructed by ceiling $\min\{f, b\}$ and $\min\{g, b\}$, we have
$$\min\{f, b\} \oplus \min\{g, b\} \le f'_b \oplus g'_b \le \min\{f, b\} \oplus \min\{g, b\} + 2\epsilon b,$$
which implies
$$\min\{\min\{f, b\} \oplus \min\{g, b\}, b\} \le \min\{f'_b \oplus g'_b, b\} \le \min\{\min\{f, b\} \oplus \min\{g, b\}, b\} + 2\epsilon b.$$
From Lemma 4, we know that $\min\{\min\{f, b\} \oplus \min\{g, b\}, b\} = \min\{f \oplus g, b\}$, which completes the proof.
Lemma 7. Let $f_1, f_2, \ldots, f_m$ be nondecreasing step functions with l breakpoints in total. Then $\min\{f_1 \oplus f_2 \oplus \ldots \oplus f_m, b\}$ can be approximated by a step function $\psi_b$ with $O(l + m/\epsilon^2)$ computational complexity and $m\epsilon b$ additive error. The resultant function $\psi_b$ has $O(1/\epsilon)$ breakpoints.
Proof. From Lemma 6, we have shown the case m = 2. For general m > 2, we can construct a binary tree to approximate pairs of functions; e.g., if m = 4, we can first approximate $\psi^{(1)} \approx \min\{f_1 \oplus f_2, b\}$ and $\psi^{(2)} \approx \min\{f_3 \oplus f_4, b\}$, then approximate $\psi^{(3)}_b \approx \min\{\psi^{(1)} \oplus \psi^{(2)}, b\}$. In this way, we construct a binary tree which has $O(\log m)$ depth and $O(m)$ nodes. In the beginning, we use the ceiling function to construct m new $(\epsilon b)$-uniform functions:
$$f'_{i,b}(x) = \Big\lceil \frac{\min(b, f_i(x))}{\epsilon b} \Big\rceil \epsilon b, \quad \forall i \in \{1, 2, \ldots, m\}.$$
Then we can use the binary tree to “merge” all the m functions in pairs, via O(logm) iterations. Without loss of generality, we assume m is a power of two. We can recursively merge t functions into t/2 functions:
1. Initialize $t = m$, $g'_{i,b} = f'_{i,b}, \forall i \in \{1, \ldots, t\}$.
2. Reassign $g'_{i,b} = \min\{g'_{2i-1,b} \oplus g'_{2i,b}, b\}, \forall i \in \{1, \ldots, t/2\}$. According to Lemma 6, the number of breakpoints of $\min\{g'_{2i-1,b} \oplus g'_{2i,b}, b\}$ is still $O(1/\epsilon)$.
3. $t = t/2$. If $t > 1$, go back to Step 2.
4. Return $\psi_b := \min\{g'_{1,b}, b\}$.
For this binary tree, the functions at the bottom leaf nodes have $\epsilon b$ additive error, and every (min, +)-convolution $f' \oplus g'$ accumulates the additive error of the two functions $f'$ and $g'$. The root node of the binary tree accumulates the additive errors from all the m leaf nodes, thus the resultant function satisfies $\psi_b \le \min\{f_1 \oplus \ldots \oplus f_m, b\} + m\epsilon b$. As for the computational complexity, initializing $f'_{i,b}$ takes O(l), Step 1 takes O(l), Steps 2 and 3 take $O(m/\epsilon^2)$ (since there are O(m) nodes in the binary tree), and Step 4 takes $O(m/\epsilon)$. Therefore, the total is $O(l + m/\epsilon^2)$.
Lemma 8. For the inverted knapsack problem defined in Equation (29), if all the n objects can be separated into m groups $I_1, \ldots, I_m$ having m distinct weights, there exists an approximation algorithm with computational complexity $O\big((n + \frac{m^3}{\epsilon^2}) \log \frac{n\max(w)}{\min(w)}\big)$ which approximates $h_I$ by $\tilde{h}_I$:
$$h_I(x) \le \tilde{h}_I(x) \le (1 + O(\epsilon))\, h_I(x), \quad \forall x.$$
Proof. First, the step functions $h_{I_i}, \forall i \in \{1, 2, \ldots, m\}$ can be easily generated within $O(n \log n)$ by sorting the objects of each group according to their values (in descending order). From the definition of the (min, +)-convolution, we know that $h_I = h_{I_1} \oplus \ldots \oplus h_{I_m}$. Let us construct an algorithm to approximate $h_I$:
1. Construct a set $B := \{2^i\, n \max(w) \in [\min(w), n\max(w)];\; i \in \mathbb{Z}_{\le 0}\}$, where $\min(w)$ and $\max(w)$ are the minimum and maximum weights of the items, respectively, and $\mathbb{Z}_{\le 0}$ is the set of nonpositive integers. We have $|B| = O(\log \frac{n\max(w)}{\min(w)})$.
2. For every $b \in B$, construct $\psi_b$ to approximate $\min\{h_{I_1} \oplus \ldots \oplus h_{I_m}, b\}$ based on Lemma 7.
3. Construct the function $\tilde{h}^{-1}_I$:
$$\tilde{h}^{-1}_I(y) = \begin{cases} \psi^{-1}_b(y), & \text{if } b/2 < y \le b \text{ and } y > \min(B); \\ \psi^{-1}_{\min(B)}(y), & \text{if } y \le \min(B), \end{cases}$$
where $\min(B)$ is the minimum element in B. The resultant function $\tilde{h}^{-1}_I$ (or $\tilde{h}_I$) has at most $O(\frac{1}{\epsilon}\log \frac{n\max(w)}{\min(w)})$ breakpoints.
4. Compute the original function $\tilde{h}_I$ from $\tilde{h}^{-1}_I$.
According to the above procedure, for any $h_I(x) \in (b/2, b]$, $\tilde{h}_I(x)$ approximates $h_I(x)$ with additive error $O(m\epsilon b)$, so we have $h_I(x) \le \tilde{h}_I(x) \le (1 + O(m\epsilon)) h_I(x)$. The algorithm takes $O((n + m/\epsilon^2)\log\frac{n\max(w)}{\min(w)})$; if we require the approximation factor to be $1 + O(\epsilon)$, i.e.,
$$h_I(x) \le \tilde{h}_I(x) \le (1 + O(\epsilon))\, h_I(x), \quad \forall x,$$
we need $O\big((n + m^3/\epsilon^2)\log\frac{n\max(w)}{\min(w)}\big)$ time complexity.
Theorem 9. For the knapsack problem defined in Equation (12), if all the n objects have m distinct weights, there exists an approximation algorithm with computational complexity $O\big((n + \frac{m^3}{\epsilon^2})\log\frac{n\max(w)}{\min(w)}\big)$ that generates a function $\tilde{h}^{-1}_I$ satisfying:
$$h^{-1}_I\Big(\frac{y}{1 + O(\epsilon)}\Big) \le \tilde{h}^{-1}_I(y) \le h^{-1}_I(y), \quad \forall y.$$
Proof. From Lemma 8, we have $\tilde{h}_I(x) \le (1 + O(\epsilon)) h_I(x)$, which implies
$$\{x \mid (1 + O(\epsilon)) h_I(x) \le y\} \subseteq \{x \mid \tilde{h}_I(x) \le y\}.$$
So
$$\max_{h_I(x) \le y/(1 + O(\epsilon))} x \;\le\; \max_{\tilde{h}_I(x) \le y} x \;\;\Leftrightarrow\;\; h^{-1}_I\Big(\frac{y}{1 + O(\epsilon)}\Big) \le \tilde{h}^{-1}_I(y).$$
Similarly, we can get $\{x \mid \tilde{h}_I(x) \le y\} \subseteq \{x \mid h_I(x) \le y\}$ from Lemma 8, so we have
$$\max_{h_I(x) \le y} x \;\ge\; \max_{\tilde{h}_I(x) \le y} x \;\;\Leftrightarrow\;\; h^{-1}_I(y) \ge \tilde{h}^{-1}_I(y).$$
Let $I_+$ be the set of objects whose weights are the nonzero elements in A and whose values are the corresponding elements in $Z \odot Z$, i.e., $I_+ = \{(Z_i^2, A_i) \mid \forall i \in \{1, 2, \ldots, |A|\} \text{ and } A_i > 0\}$, and let $\tilde{\xi}^+$ be the solution corresponding to $\tilde{h}^{-1}_{I_+}(E_{\mathrm{budget}} - \sum_{u \in U \cup V}\alpha^{(u)}_4)$. Let $\tilde{\xi}_{I^c_+} = 1$ and $\tilde{\xi}_{I_+} = \tilde{\xi}^+$, where $I^c_+ = \{(Z_i^2, A_i) \mid \forall i \in \{1, 2, \ldots, |A|\} \text{ and } A_i = 0\}$ is the complement of $I_+$. Here we have $m \le 2|U| + |V|$ distinct values in A. According to Theorem 9, we have $\langle Z \odot Z, \tilde{\xi} \rangle \ge \max_{\xi} \langle Z \odot Z, \xi \rangle$ s.t. $\langle A, \xi \rangle \le \frac{E_{\mathrm{budget}} - \sum_{u \in U \cup V}\alpha^{(u)}_4}{1 + O(\epsilon)}$, which implies
$$\langle Z \odot Z, \tilde{\xi} \rangle \ge \max_{\xi \in \Omega(E_{\mathrm{budget}}/(1 + O(\epsilon)))} \langle Z \odot Z, \xi \rangle.$$
From Theorem 9, we directly obtain Theorem 3.
Algorithm 2: Greedy Algorithm to Solve Problem (12).
Input: Z, A, Ebudget, {α(u)}u∈U∪V as in (12).
Result: Greedy solution ξ̃ for problem (12).
1:  Initialize b = 0, ξ = 0.
2:  Generate the profit density δ:
        δj = (Zj)² / Aj, if Aj > 0;    δj = ∞, if Aj = 0.
3:  Sort δ; let I be the index list of the sorted δ (in descending order).
4:  foreach index j ∈ I do
5:      b = b + Aj;
6:      If b > Ebudget − Σ_{u∈U∪V} α(u)_4, exit loop;
7:      ξj = 1;
8:  end
9:  ξ̃ = ξ.
PROOF TO THEOREM 2
Proof. From Theorem 1, we know the original projection problem (10) is equivalent to the knapsack problem (12). So proving inequality (14) is equivalent to proving
$$\langle Z \odot Z, \tilde{\xi} \rangle \ge \langle Z \odot Z, \xi^* \rangle - \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot R(\tilde{\xi}) \qquad (34)$$
and
$$\langle Z \odot Z, \tilde{\xi} \rangle \ge \langle Z \odot Z, \xi^* \rangle - \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot (\max(A) - \gcd(A)), \qquad (35)$$
where $\tilde{\xi}$ is the greedy solution of the knapsack problem corresponding to $W''$, and $\xi^*$ is the exact solution of the knapsack problem corresponding to $P_{\Omega(E_{\mathrm{budget}})}(Z)$, i.e.,
$$W'' = Z \odot \tilde{\xi}, \qquad P_{\Omega(E_{\mathrm{budget}})}(Z) = Z \odot \xi^*.$$
First, let us prove inequality (34). If we relax the values of $\xi$ to lie in the range [0, 1] instead of {0, 1}, the discrete constraint is removed so that the constraint set becomes
$$\Delta = \Big\{ \xi \;\Big|\; 0 \le \xi \le 1 \text{ and } \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V}\alpha^{(u)}_4 \Big\}.$$
The 0/1 knapsack problem is thus relaxed to a linear program. This relaxed problem is called the fractional knapsack problem, and there is a greedy algorithm (Dantzig, 1957) that solves it exactly. Slightly different from our Algorithm 2, the greedy algorithm for the fractional knapsack can select a fraction of an item, so its remaining budget is always zero. The optimal objective value of the fractional knapsack is
$$\max_{\xi \in \Delta} \langle Z \odot Z, \xi \rangle = \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot R(\tilde{\xi}).$$
Since the constraint set of the fractional knapsack problem is a superset of the constraint set of the original knapsack problem, we have $\langle Z \odot Z, \xi^* \rangle \le \max_{\xi \in \Delta} \langle Z \odot Z, \xi \rangle$, which leads to inequality (34). Second, we show that inequality (35) is also true. Since all the coefficients in A are multiples of gcd(A), we can relax the original 0/1 knapsack problem in the following way: split each item into several items whose coefficients in the constraint are gcd(A), with the coefficient in the objective function split equally. For the j-th item, the coefficient in the constraint is $A_j$ and the coefficient in the objective function is $(Z \odot Z)_j$; it is split into $A_j/\gcd(A)$ items, each associated with coefficient $(Z_j^2/A_j)\cdot \gcd(A)$ in the objective function. This relaxation gives a new 0/1 knapsack problem in which all items have the same coefficient in the constraint, so the optimal solution is simply to select the items with the largest coefficients in the objective function. We can formulate this as a relaxed knapsack problem by replacing the constraint on $\xi$ with $\xi \in \Gamma$, where
$$\Gamma = \Big\{ \xi \;\Big|\; \text{for all } j,\; \xi_j \text{ is a multiple of } \tfrac{\gcd(A)}{A_j},\; 0 \le \xi_j \le 1, \text{ and } \langle A, \xi \rangle \le E_{\mathrm{budget}} - \sum_{u \in U \cup V}\alpha^{(u)}_4 \Big\}.$$
All the elements of the solution are either 0 or 1 except the last picked one, which corresponds to $\mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big)$. Let the $(\|\tilde{\xi}\|_0 + 1)$-th largest element in $(Z \odot Z) \oslash A$ be indexed by t. We have $0 \le \tilde{\xi}_t \le 1 - \gcd(A)/A_t$. Therefore, compared with the original 0/1 knapsack problem, we have
$$\max_{\xi \in \Gamma} \langle Z \odot Z, \xi \rangle \le \langle Z \odot Z, \tilde{\xi} \rangle + (Z \odot Z)_t \cdot (1 - \gcd(A)/A_t)$$
$$= \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot A_t \cdot (1 - \gcd(A)/A_t)$$
$$= \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot (A_t - \gcd(A))$$
$$\le \langle Z \odot Z, \tilde{\xi} \rangle + \mathrm{Top}_{\|\tilde{\xi}\|_0 + 1}\big((Z \odot Z) \oslash A\big) \cdot (\max(A) - \gcd(A)).$$
Since $\{\xi \mid \xi \text{ is binary}\} \subseteq \Gamma$, we have $\langle Z \odot Z, \xi^* \rangle \le \max_{\xi \in \Gamma}\langle Z \odot Z, \xi \rangle$. So we obtain inequality (35).
SUPPLEMENTARY EXPERIMENT RESULTS
RESULTS OF BASELINE WITHOUT KNOWLEDGE DISTILLATION
Table 3 shows the energy and accuracy drop results of the baseline methods MP and SSL when the knowledge distillation is removed from their loss function. By using knowledge distillation, the results in Table 1 are much better. Therefore, we use knowledge distillation in all the experiments when it is applicable.
ENERGY-CONSTRAINED PROJECTION EFFICIENCY
The projection operation $P_{\Omega(E_{\mathrm{budget}})}$ in Algorithm 1 can be implemented on GPU. We measured its wall-clock time on a GPU server (CPU: Xeon E3 1231-v3, GPU: GTX 1080 Ti), and the result is shown in Table 4 (the time is averaged over 100 iterations).
1. What is the main contribution of the paper regarding training neural networks?
2. What are the strengths and weaknesses of the proposed approach compared to standard compression methods?
3. How does the reviewer assess the novelty of the paper's content, particularly regarding the use of a mask for controlling the sparsity of network inputs?
4. What are the concerns regarding the empirical comparisons and hyperparameter tuning for the proposed method?
5. What are the limitations of the paper's theoretical analysis, especially regarding the greedy approximation strategy and the complexity of the energy constraint?
6. How does the reviewer evaluate the effectiveness and efficiency of the proposed method compared to state-of-the-art compression methods?
Review
This paper describes a procedure for training neural networks via an explicit constraint on the energy budget, as opposed to pruning the model size as commonly done with standard compression methods. Comparative results are shown on a few data sets where the proposed method outperforms multiple different approaches. Overall, the concept is interesting and certainly could prove valuable in resource-constrained environments. Still I retain some reservations as detailed below.
My first concern is that this paper exceeds the recommended 8 page limit for reasons that are seemingly quite unnecessary. There are no large, essential figures/tables, and nearly the first 6 pages is just introduction and background material. Likewise the paper consumes a considerable amount of space presenting technical results related to knapsack problems and various epsilon-accurate solutions, but this theoretical content seems somewhat irrelevant and distracting since it is not directly related to the greedy approximation strategy actually used for practical deployment. Much of this material could have been moved to the supplementary so as to adhere to the 8 page soft limit. Per the ICLR reviewer instructions, papers deemed unnecessarily long relative to this length should be judged more critically.
Another issue relates to the use of a mask for controlling the sparsity of network inputs. Although not acknowledged, similar techniques are already used to prune the activations of deep networks for compression. In particular, various forms of variational dropout essentially use multiplicative weights to remove the influence of activations and/or other network components similar to the mask M used is this work. Representative examples include Neklyudov et al., "Structured Bayesian Pruning via Log-Normal Multiplicative Noise," NIPS 2017 and Louizos et al., "Bayesian Compression for Deep Learning," NIPS 2017, but there are many other related alternatives using some form of trainable gate or mask, possibly stochastic, to affect pruning (the major ML and CV conferences over the past year have numerous related compression papers). So I don't consider this aspect of the paper to be new in any significant way.
Moreover, for the empirical comparisons it would be better to compare against state-of-the-art compression methods as opposed to just the stated MP and SSL methods from 2015 and 2016 respectively. Despite claims to the contrary on page 9, I would not consider these to be state-of-the-art methods at this point.
Another comment I have regarding the experiments is that hyperparameters and the use of knowledge distillation were potentially tuned for the proposed method and then simultaneously applied to the competing algorithms for the sake of head-to-head comparison. But to me, if these enhancements are to be included at all, tuning must be done carefully and independently for each algorithm. Was this actually done? Moreover it would have been nice to see results without the confounding influence of distillation to isolate sources of improvement, but no ablation studies were presented.
Finally, regarding the content in Section 5, the paper carefully presents an explicit bound on energy that ultimately leads to a constraint that is NP-hard just to project on to, although approximate solutions exist that depend on some error tolerance. However, even this requires an algorithm that is dismissed as "complicated." Instead a greedy alternative is derived in the Appendix which presumably serves as the final endorsed approach. But at this point it is no longer clear to me exactly what performance guarantees remain with respect to the energy bound. Theorem 3 presents a fairly inscrutable bound, and it is not at all transparent how to interpret this in any practical sense. Note that after Theorem 3, conditions are described whereby an optimal projection can be obtained, but these seem highly nuanced, and unlikely to apply in most cases.
Additionally, it would appear that crude bounds on the energy could also be introduced by simply penalizing/constraining the sparsity on each layer, which leads to a much simpler projection step. For example, a simple affine function of the L0 norm would be much easier to optimize and could serve as a loose bound on the energy, given that the latter should be a non-decreasing function of the L0 norm. Any idea how such a bound compares to those presented given all the approximations and greedy steps that must be included?
Other comments:
- As an implementation heuristic, the proposed Algorithm 1 gradually decays the parameter q, which controls the sparsity of the mask M. But this will certainly alter the energy budget, and I wonder how important it is to employ a complex energy constraint if minimization requires this type of heuristic.
- I did not see where the quantity L(M,W) embedded in eq. (17) was formally defined, although I can guess what it is.
- In general it is somewhat troublesome that, on top of a complex, non-convex deep network energy function, just the small subproblem required for projecting onto the energy constraint is NP-hard. Even if approximations are possible, I wonder if this extra complexity is always worth it relative so simple sparsity-based compression methods which can be efficiently implemented with exactly closed-form projections.
- In Table 1, the proposed method is highlighted as having the smallest accuracy drop on SqueezeNet. But this is not true, EAP is lower. Likewise on AlexNet, NetAdapt has an equally optimal energy. |
ICLR | Title
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Abstract
Deep Neural Networks (DNNs) are increasingly deployed in highly energyconstrained environments such as autonomous drones and wearable devices while at the same time must operate in real-time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving methods, our framework trains DNNs that provide higher accuracies under same or lower energy budgets.
1 INTRODUCTION
Deep Neural Networks (DNNs) have become the fundamental building blocks of many emerging application domains such as computer vision (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014), speech recognition (Hinton et al., 2012), and natural language processing (Goldberg, 2016). Many of these applications have to operate in highly energy-constrained environments. For instance, autonomous drones have to continuously perform computer vision tasks (e.g., object detection) without a constant power supply. Designing DNNs that can meet severe energy budgets has increasingly become a major design objective.
The state-of-the-art model compression algorithms adopt indirect techniques to restrict the energy consumption, such as pruning (or sparsification) (He et al., 2018; Han et al., 2015a; Liu et al., 2015; Zhou et al., 2016; Li et al., 2016; Wen et al., 2016) and quantization (Gong et al., 2014; Wu et al., 2016; Han et al., 2015a; Courbariaux et al., 2015; Rastegari et al., 2016). These techniques are agonistic to energy consumption; rather they are designed to reduce the amount of computations and the amount of model parameters in a DNN, which do not truly reflect the energy consumption of a DNN. As a result, these indirect approaches only indirectly reduce the total energy consumption. Recently, Energy-Aware Pruning (EAP) (Yang et al., 2017) proposes a more direct manner to reduce the energy consumption of DNN inferences by guiding weight pruning using DNN energy estimation, which achieves higher energy savings compared to the indirect techniques.
However, a fundamental limitation of all existing methods is that they do not provide quantitative energy guarantees, i.e., ensuring that the energy consumption is below a user-specified energy budget. In this paper, we aspire to answer the following key question: how to design DNN models that satisfy a given energy budget while maximizing the accuracy? This work provides a solution to this question through an end-to-end training framework. By end-to-end, we refer to an approach that directly meets the energy budget without relying heuristics such as selectively restoring pruned weights and layer by layer fine-tuning (Han et al., 2015b; Yang et al., 2017). These heuristics are effective in practice but also have many hyper-parameters that must be carefully tuned.
Our learning algorithm directly trains a DNN model that meets a given energy budget while maximiz-
ing model accuracy without incremental hyper-parameter tuning. The key idea is to formulate the DNN training process as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. In this way, a DNN model, once is trained, by design meets the energy budget while maximizing the accuracy.
Without losing generality, we model the DNN energy consumption after the popular systolic array hardware architecture (Kung, 1982) that is increasingly adopted in today’s DNN hardware chips such as Google’s Tensor Processing Unit (TPU) (Jouppi et al., 2017), NVidia’s Tensor Cores, and ARM’s ML Processor. The systolic array architecture embodies key design principles of DNN hardware that is already available in today’s consumer devices. We specifically focus on pruning, i.e., controlling the DNN sparsity, as the main energy reduction technique. Overall, the energy model models the DNN inference energy as a function of the sparsity of the layer parameters and the layer input.
Given the DNN energy estimation, we formulate DNN training as an optimization problem that minimizes the accuracy loss under the constraint of a certain energy budget. The key difference between our optimization formulation and that of conventional DNN training is two-fold. First, our optimization problem considers the energy constraint, which is not present in conventional training. Second, layer inputs are non-trainable in conventional DNN training since they depend on the network input. We introduce a new concept, called the input mask, that enables the input sparsity to be controlled by a trainable parameter, and thus increases the energy reduction opportunities. This lets us further reduce energy in scenarios with known input data patterns.
We propose an iterative algorithm to solve the above optimization problem. A key step in the optimization is the projection operation onto the energy constraint, i.e., finding a model that is closest to the given (dense) model and satisfies the energy constraint. We prove that this projection can be cast as a 0/1 knapsack problem and show that it can be solved very efficiently. Evaluation results show that our proposed training framework achieves higher accuracy under the same or lower energy compared to the state-of-the-art energy-saving methods.
In summary, we make the following contributions in this paper:
• To the best of our knowledge, this is the first end-to-end DNN training framework that provides quantitative energy guarantees;
• We propose a quantitative model to estimate the energy consumption of DNN inference on TPU-like hardware. The model can be extended to model other forms of DNN hardware;
• We formulate a new optimization problem for energy-constrained DNN training and present a general optimization algorithm that solves the problem.
2 RELATED WORK
Energy-Agnostic Optimizations Most existing DNN optimizations indirectly optimize DNN energy by reducing model complexity. They are agnostic to energy consumption, and therefore cannot provide any quantitative energy guarantees.
Pruning, otherwise known as sparsification, is perhaps the most widely used technique to reduce DNN model complexity by reducing both computation and hardware memory accesses. It is based on the intuition that DNN model parameters with low magnitude have little impact on the final prediction and can thus be zeroed out. The classic magnitude-based pruning (Han et al., 2015b) removes weights whose magnitudes are lower than a threshold. Subsequent work guides pruning using special structures (Liu et al., 2015; Zhou et al., 2016; Li et al., 2016; Wen et al., 2016; He et al., 2017), such as removing an entire channel, to better retain accuracy after pruning.
Quantization reduces the number of bits used to encode model parameters, thereby reducing both computation energy and data access energy (Gong et al., 2014; Wu et al., 2016; Han et al., 2015a). The extreme case of quantization uses 1 bit to represent model parameters (Courbariaux et al., 2015; Rastegari et al., 2016). Such binary quantization methods are usually trained from scratch instead of quantizing a pre-trained DNN.
Energy-Aware Optimizations Recently, energy-aware pruning (EAP) (Yang et al., 2017) proposes to use a quantitative energy model to guide model pruning. Different from pure magnitude-based pruning methods, EAP selectively prunes the DNN layer that contributes the most to the total energy consumption. It then applies a sequence of fine-tuning techniques to retain model accuracy. The pruning step and fine-tuning step are alternated until the accuracy loss exceeds a given threshold.
Although EAP is a promising first step toward energy-aware optimization, its key limitation is that it does not provide quantitative energy guarantees because it does not explicitly consider an energy budget as a constraint. Our work integrates the energy budget as an optimization constraint in model training.
Latency-Guaranteed Compression Lately, model compression research has started providing guarantees on execution (inference) latency, which theoretically could be extended to providing energy guarantees as well. However, these methods are primarily search-based, through either reinforcement learning (He et al., 2018) or greedy search (Yang et al., 2018). They search the sparsity setting for every single layer to meet the given budget. Thus, they may require a large number of trials to achieve good performance, and may not ensure that the resulting model accuracy is maximized.
3 MODELING DNN INFERENCE ENERGY CONSUMPTION
This section introduces our model for estimating the energy consumption of a single DNN inference. We consider the widely-used feed-forward DNNs; note that our proposed methodology can be easily extended to other network architectures as well. In this section, we first provide an overview of our energy modeling methodology (Section 3.1). We then present the detailed per-layer energy modeling (Section 3.2 and Section 3.3), which allows us to derive the overall DNN energy consumption (Section 3.4). Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim (Samajdar et al., 2018).
DNN model sparsity (via pruning) is well recognized to significantly affect the execution efficiency and thus affect the energy consumption of a DNN model (He et al., 2018; Yang et al., 2017; Han et al., 2015a; Liu et al., 2015; Zhou et al., 2016). We thus use pruning as the mechanism to reduce energy consumption1. Note, however, that model sparsity is not the end goal of our paper; rather we focus on reducing the energy consumption directly. Many dedicated DNN hardware chips (a.k.a., Neural Processing Units, NPUs) (Jouppi et al., 2017; Chen et al., 2016; Han et al., 2016; Parashar et al., 2017) have been developed to directly benefit from model sparsity, and are already widely available in today’s consumer devices such as Apple iPhoneX, Huawei Mate 10, and Microsoft HoloLens. Our paper focuses on this general class of popular, widely-used DNN chips.
3.1 ENERGY MODELING OVERVIEW
A DNN typically consists of a sequence of convolution (CONV) layers and fully connected (FC) layers interleaved with a few other layer types such as Rectified Linear Unit (ReLU) and batch normalization. We focus mainly on modeling the energy consumption of the CONV and FC layers. This is because CONV and FC layers comprise more than 90% of the total execution time during a DNN inference (Chen et al., 2016) and are the major energy consumers (Han et al., 2015a; Yang et al., 2017). Energy consumed by other layer types is insignificant and can be taken away from the energy budget as a constant factor.
A DNN inference’s energy consumption is tied to the underlying hardware that performs the inference. In particular, we assume a systolic-array-based DNN hardware architecture. The systolic array (Kung, 1982) has long been known as an effective approach for matrix multiplication. Many DNN hardware architectures adopt the systolic array, most notably the Google Tensor Processing Unit (TPU) (Jouppi
¹ Quantization is another useful mechanism to reduce energy consumption. It is orthogonal to the pruning mechanism, and the two could be combined. This paper specifically focuses on the pruning mechanism.
et al., 2017), Nvidia’s Tensor Cores in their most recent Volta GPUs, and ARM’s ML Processor. Targeting systolic-array-based DNN hardware ensures that our approach has wide applicability. However, our modeling and training strategies can generally be applied to other DNN architectures.
Figure 1 shows the overall hardware architecture. The systolic array comprises several compute units that perform the Multiply-and-Accumulate (MAC) operation, which conducts the following computation: a ← a + (b × c), where b and c are the two scalar inputs and a is the scalar intermediate result called the “partial sum.” The MAC operation is the building block of matrix multiplication. The MAC units are organized in a 2-D grid. The data is fed from the edges, both horizontally and vertically, and then propagates to the MAC units in the same rows and columns.
We decompose the energy cost into two parts: computation energy Ecomp and data access energy Edata. Ecomp denotes the energy consumed by computation units, and Edata denotes the energy consumed when accessing data from the hardware memory. Since we mainly use pruning as the energy reduction technique, we now model how Ecomp and Edata are affected by DNN sparsity.
3.2 ENERGY CONSUMPTION FOR COMPUTATION
CONV layers perform convolution and FC layers perform matrix-vector multiplication. Both operations can be generalized to matrix-matrix multiplication, which involves only the MAC operation (Chetlur et al., 2014; Jouppi et al., 2017). Figure 1 illustrates how a matrix-matrix multiplication is carried out on the systolic array hardware. Given X and W, the systolic array computes XW by passing each row of X to each row of the systolic array and passing each column of W to each column of the systolic array. If the width of the systolic array, denoted by s_w, is less than the width of W, the hardware folds W column-wise in strides of s_w. Similarly, if the height of X is greater than the height of the systolic array (s_h), X is folded row-wise in strides of s_h. Figure 1 illustrates a 2×2 systolic array multiplying two 4×4 matrices; both matrices are folded twice in strides of 2. Critically, if either input of a MAC operation is zero, we can skip the MAC operation entirely and thus save the computation energy. At a high level, the total computation energy E_comp can be modeled as e_MAC · N_MAC, where e_MAC denotes the energy consumption of one MAC operation and N_MAC denotes the total number of MAC operations that are actually performed. The challenge is to identify N_MAC for CONV and FC layers, which we discuss below.
Fully connected layer Let X^(v) ∈ R^{1×c} be the input vector and W^(v) ∈ R^{c×d} be the weight matrix of FC layer v. The FC layer performs the matrix-vector multiplication X^(v) W^(v). The number of MAC operations is N_MAC = sum(supp(X^(v)) ⊙ supp(W^(v))), where supp(T) returns a binary tensor indicating the nonzero positions of tensor T. So the computation energy of a fully connected layer v is

E^(v)_comp = e_MAC · sum(supp(X^(v)) ⊙ supp(W^(v))) ≤ e_MAC ‖W^(v)‖_0,   (1)

where the equality is reached when the input is dense.
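To make Equation (1) concrete, the short sketch below counts the MAC operations of a sparse FC layer and converts them into computation energy. It is an illustrative NumPy implementation of the cost model described above; the unit cost e_mac is a placeholder hardware constant, not a value given in the paper.

import numpy as np

def fc_computation_energy(x, w, e_mac=1.0):
    """Computation energy of a fully connected layer, following Eq. (1).

    x: (c,) input vector, w: (c, d) weight matrix, e_mac: energy of one MAC
    (placeholder constant). A MAC is counted only when both the input
    element and the weight element are nonzero.
    """
    x_supp = (x != 0).astype(np.float64)     # supp(X)
    w_supp = (w != 0).astype(np.float64)     # supp(W)
    n_mac = float((x_supp @ w_supp).sum())   # sum(supp(X) * supp(W))
    return e_mac * n_mac                     # <= e_mac * ||W||_0 when the input is dense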
Convolution layer The CONV layer performs a convolution between a 4-D weight (also referred to as kernel or filter) tensor and a 3-D input tensor. Let W^(u) ∈ R^{d×c×r×r} be the weight tensor, where d, c, and r are tensor dimension parameters, and let X^(u) ∈ R^{c×h×w} be the input tensor, where h and w are the input height and width. The convolution operation in CONV layer u generates a 3-D output tensor:

(X^(u) ∗ W^(u))_{j,y,x} = Σ_{i=1}^{c} Σ_{r′,r″=0}^{r−1} X^(u)_{i, y+r′, x+r″} W^(u)_{j,i,r′,r″},   (2)

where x, y index the position in the output tensor, which has height h′ = ⌊(h + 2p − r)/s⌋ + 1 and width w′ = ⌊(w + 2p − r)/s⌋ + 1 (p is the convolution padding and s is the convolution stride). Tensor convolution (2) can be viewed as a special matrix-matrix multiplication (Chellapilla et al., 2006; Chetlur et al., 2014). Specifically, we unfold the tensor X^(u) into a matrix X̄^(u) ∈ R^{h′w′×cr²} and unfold the tensor W^(u) into a matrix W̄^(u) ∈ R^{cr²×d}. X̄^(u) and W̄^(u) are then multiplied in the systolic array to compute the equivalent convolution result between X^(u) and W^(u).

Only nonzero elements in X^(u) and W^(u) incur actual MAC operations. Thus N_MAC = sum(supp(X^(u)) ∗ supp(W^(u))) ≤ h′w′ ‖W^(u)‖_0 (equality holds when the input is dense), resulting in the following computation energy of a CONV layer u:

E^(u)_comp = e_MAC · sum(supp(X^(u)) ∗ supp(W^(u))) ≤ e_MAC h′w′ ‖W^(u)‖_0.   (3)
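The following sketch mirrors Equation (3): it unfolds the input with an im2col-style transformation so the convolution becomes a matrix product, and counts only the MACs where both operands are nonzero. This is an assumption-laden NumPy illustration (explicit loops, no grouped convolution), not the authors' code.

import numpy as np

def conv_computation_energy(x, w, e_mac=1.0, stride=1, pad=0):
    """Computation energy of a CONV layer under Eq. (3).

    x: (c, h, w_in) input tensor, w: (d, c, r, r) weight tensor.
    The input is unfolded (im2col) and a MAC is counted only when both
    the unfolded-input entry and the weight entry are nonzero.
    """
    c, h, w_in = x.shape
    d, _, r, _ = w.shape
    h_out = (h + 2 * pad - r) // stride + 1
    w_out = (w_in + 2 * pad - r) // stride + 1
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))

    # Unfold X into shape (h_out * w_out, c * r * r)
    cols = np.zeros((h_out * w_out, c * r * r))
    for y in range(h_out):
        for xx in range(w_out):
            patch = xp[:, y * stride:y * stride + r, xx * stride:xx * stride + r]
            cols[y * w_out + xx] = patch.ravel()

    w_mat = w.reshape(d, -1).T                  # (c * r * r, d), same unfolding order
    n_mac = float(((cols != 0).astype(float) @ (w_mat != 0).astype(float)).sum())
    return e_mac * n_mac                        # <= e_mac * h_out * w_out * ||W||_0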
3.3 ENERGY CONSUMPTION FOR DATA ACCESS
Accessing data happens in every layer. The challenge in modeling the data access energy is that modern hardware is equipped with a multi-level memory hierarchy to improve speed and save energy (Hennessy & Patterson, 2011). Specifically, the data is originally stored in a large memory, which is slow and energy-hungry. When data is needed for a computation, the hardware loads it from the large memory into a smaller memory that is faster and consumes less energy. If the data is reused often, it mostly lives in the small memory. Such a multi-level memory hierarchy thus saves overall energy and improves overall speed by exploiting data reuse.
Without loss of generality, we model a common, three-level memory hierarchy composed of a Dynamic Random Access Memory (DRAM), a cache, and a Register File (RF). The cache is split into two halves: one for holding X (i.e., the feature map in a CONV layer and the feature vector in an FC layer) and the other for holding W (i.e., the convolution kernel in a CONV layer and the weight matrix in an FC layer). This is by far the most common memory hierarchy in DNN hardware such as Google's TPU (Jouppi et al., 2017; Chen et al., 2016; Zhu et al., 2018; Han et al., 2016). Data is always loaded from DRAM into the cache, and then from the cache into the RFs.
In many of today’s DNN hardware designs, the activations and weights are stored in a compressed format, and thus only non-zero values are accessed, as in prior work (Chen et al., 2016; Parashar et al., 2017). Therefore, if the value of the data being loaded is zero, the hardware can skip the data access and thereby save energy. There is a negligible amount of overhead to “unpack” and “pack” compressed data, which we simply take away from the energy budget as a constant factor. This is also the modeling assumption used by Energy-Aware Pruning (Yang et al., 2017).
To compute E_data, we must calculate the number of data accesses at each memory level, i.e., N_DRAM, N_cache, and N_RF. Let the unit energy costs of the different memory levels be e_DRAM, e_cache, and e_RF, respectively; the total data access energy is E_data = e_DRAM N_DRAM + e_cache N_cache + e_RF N_RF. We count the number of data accesses for both the weights and the input, then combine them. The detailed derivation of the data access energy is included in the Appendix.
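As a sketch, the per-layer data access energy is then just a weighted sum of the access counts at each level; the helper below combines counts that would be produced by the formulas in the Appendix. The default unit energies are illustrative placeholders (DRAM accesses are typically far more expensive than register accesses); the paper takes its actual constants from the EAP hardware parameters.

def data_access_energy(n_dram, n_cache, n_rf,
                       e_dram=200.0, e_cache=6.0, e_rf=1.0):
    """E_data = e_DRAM * N_DRAM + e_cache * N_cache + e_RF * N_RF.

    The unit energies are placeholder constants, not the values used in the paper.
    """
    return e_dram * n_dram + e_cache * n_cache + e_rf * n_rf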
3.4 THE OVERALL ENERGY ESTIMATION FORMULATION
Let U and V be the sets of convolutional layers and fully connected layers in a DNN, respectively. The superscripts (u) and (v) indicate the energy consumption of layers u ∈ U and v ∈ V, respectively. Then the overall energy consumption of a DNN inference is modeled by

E(X, W) := Σ_{u∈U} (E^(u)_comp + E^(u)_data) + Σ_{v∈V} (E^(v)_comp + E^(v)_data),   (4)

where X stacks the input vectors/tensors of all layers and W stacks the weight matrices/tensors of all layers.
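Putting the pieces together, Equation (4) is simply a sum over layers. A trivial sketch, assuming each layer descriptor already holds its precomputed computation and data access energies:

def total_inference_energy(conv_layers, fc_layers):
    """Overall DNN inference energy, Eq. (4). Each entry is assumed to be a
    dict with precomputed per-layer energies, e.g. {"e_comp": ..., "e_data": ...}."""
    return sum(l["e_comp"] + l["e_data"] for l in conv_layers + fc_layers)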
4 ENERGY-CONSTRAINED DNN MODEL
Given the energy model presented in Section 3, we propose a new energy-constrained DNN model that bounds the energy consumption of a DNN’s inference. Different from prior work on model pruning in which energy reduction is a byproduct of model sparsity, our goal is to directly bound the energy consumption of a DNN while sparsity is just used as a means to reduce energy.
This section formulates training an energy-constrained DNN as an optimization problem. We first formulate the optimization constraint by introducing a trainable mask variable into the energy modeling to enforce layer input sparsity. We then define a new loss function by introducing the knowledge distillation regularizer that helps improve training convergence and reduce overfitting.
Controlling Input Sparsity Using Input Mask The objective of training an energy-constrained DNN is to minimize the accuracy loss while ensuring that the DNN inference energy is below a given budget, Ebudget. Since the total energy consumption is a function of ‖X(u)‖0 and ‖W (u)‖0, it is natural to think that the trainable parameters are X and W . In reality, however, X depends on the input to the DNN (e.g., an input image to an object recognition DNN), and thus is unknown during training time. Therefore, in conventional DNN training frameworks X is never trainable.
To include the sparsity of X in our training framework, we introduce a trainable binary mask M that has the same shape as X and is multiplied with X before X is fed into a CONV or FC layer, or equivalently, at the end of the previous layer. For example, if the input to a standard CONV layer is X^(u), the input becomes X^(u) ⊙ M^(u), where ⊙ denotes element-wise multiplication. In practice, we do not actually perform this multiplication but only read X^(u) at the nonzero positions of M^(u).
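As an illustration, the snippet below shows how such a trainable mask could be attached to a layer input in PyTorch. The module name and the all-ones initialization are our own choices, not from the paper, and the clamping/L0-projection/rounding steps of Algorithm 1 are applied outside this module.

import torch
import torch.nn as nn

class MaskedInput(nn.Module):
    """Element-wise trainable input mask M applied before a CONV/FC layer.

    During training M is real-valued in [0, 1]; Algorithm 1 periodically
    projects it onto an L0 ball and finally rounds it to {0, 1}.
    """
    def __init__(self, input_shape):
        super().__init__()
        self.mask = nn.Parameter(torch.ones(input_shape))  # M, initialized to all-ones

    def forward(self, x):
        # x: (batch, *input_shape); broadcasting applies the same mask to every sample
        return x * self.mask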
Algorithm 1: Energy-Constrained DNN Training.
Input: Energy budget E_budget, learning rates η1, η2, mask sparsity decay step ∆q.
Result: DNN weights W*, input mask M*.
1   Initialize W = W_dense, M = 1, q = ‖M‖_0 − ∆q;
2   while True do
        // Update DNN weights
3       while W has not converged do
4           W = W − η1 ∇̂_W L̄(M, W)            // SGD step
5           W = P_{Ω(E_budget)}(W)              // Energy-constraint projection for weights W
6       end
7       If previous_accuracy > current_accuracy, exit loop with previous W and M;
        // Update input mask
8       while M has not converged do
9           M = M − η2 ∇̂_M L̄(M, W)             // SGD step
10          Clamp values of M into [0, 1]: assign 1 (or 0) to values that exceed 1 (or are negative);
11          M = P_{‖M‖_0 ≤ q}(M)                // L0-constraint projection for input mask M
12      end
13      Round values of M into {0, 1};
14      Decay the sparsity constraint: q = q − ∆q;
15  end
16  W* = W, M* = M.
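A schematic PyTorch-style rendering of the alternating loop in Algorithm 1 is sketched below. The helpers `project_energy` and `project_l0` are assumed to implement P_Ω(E_budget) and P_{‖M‖_0 ≤ q} (e.g., via Algorithm 2 and top-q thresholding), `loss_fn` stands in for the distillation-regularized loss of Eq. (5), only a single mask at the network input is shown, and the convergence tests are reduced to fixed step counts. All of these simplifications are ours.

import torch

def train_energy_constrained(model, mask, loss_fn, data_loader,
                             project_energy, project_l0,
                             e_budget, q, delta_q,
                             lr_w=1e-3, lr_m=1e-4, inner_steps=100):
    """Alternating optimization of weights W and input mask M (sketch of Algorithm 1)."""
    opt_w = torch.optim.SGD(model.parameters(), lr=lr_w, weight_decay=1e-4)
    opt_m = torch.optim.Adam([mask], lr=lr_m)

    while q > 0:
        # --- Update DNN weights W under the energy constraint ---
        for step, (x, y) in enumerate(data_loader):
            if step >= inner_steps:
                break
            opt_w.zero_grad()
            loss_fn(model(x * mask.detach()), y).backward()
            opt_w.step()
            project_energy(model, mask, e_budget)       # P_{Omega(E_budget)}(W)

        # --- Update the input mask M under the L0 constraint ---
        for step, (x, y) in enumerate(data_loader):
            if step >= inner_steps:
                break
            opt_m.zero_grad()
            loss_fn(model(x * mask), y).backward()
            opt_m.step()
            with torch.no_grad():
                mask.clamp_(0.0, 1.0)                    # keep M in [0, 1]
                project_l0(mask, q)                      # P_{||M||_0 <= q}(M)

        with torch.no_grad():
            mask.copy_((mask > 0.5).float())             # round M to {0, 1}
        q -= delta_q                                     # decay the sparsity constraint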
With the trainable mask M, we can ensure that ‖X^(u) ⊙ M^(u)‖_0 ≤ ‖M^(u)‖_0, and thereby bound the sparsity of the input at training time. In this way, the optimization constraint during training becomes E(M, W) ≤ E_budget, where E(M, W) denotes the total DNN inference energy consumption, which is a function of X and W (as shown in Equation (4)), and thus a function of M and W.
Knowledge Distillation as a Regularizer Directly optimizing over the constraint would likely lead to a local optimum because the energy model is highly non-convex. Recent works (Mishra & Marr, 2017; Tschannen et al., 2017; Zhuang et al., 2018) notice that knowledge distillation is helpful in training compact DNN models. To improve the training performance, we apply the knowledge distillation loss (Ba & Caruana, 2014) as a regularization to the conventional loss function. Intuitively, the regularization uses a pre-trained dense model to guide the training of a sparse model. Specifically, our regularized loss function is:
L̄_{λ,W_dense}(M, W) := (1 − λ) L(M, W) + λ E_X[‖φ(X; W) − φ(X; W_dense)‖² / |φ(·; W)|],   (5)

where W_dense is the original dense model and L(M, W) is the original loss, e.g., the cross-entropy loss for a classification task. φ(X; W) is the network's output (we use the output before the last activation layer, as in Ba & Caruana (2014)), |φ(·; W)| is the network output dimensionality, and 0 ≤ λ ≤ 1 is a hyper-parameter similar to other standard regularizations.
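A minimal PyTorch sketch of the regularized loss in Equation (5) is given below, assuming `logits` are the pre-softmax outputs of the sparse model and `dense_logits` those of the frozen dense teacher; `lam` corresponds to λ.

import torch.nn.functional as F

def distillation_regularized_loss(logits, dense_logits, targets, lam=0.5):
    """(1 - lambda) * task loss + lambda * mean squared distance to the dense model,
    following Eq. (5). dense_logits should come from the frozen dense teacher."""
    task_loss = F.cross_entropy(logits, targets)
    distill = ((logits - dense_logits.detach()) ** 2).sum(dim=1).mean() / logits.shape[1]
    return (1.0 - lam) * task_loss + lam * distill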
Thus, training an energy-constrained DNN model is formulated as the optimization problem

min_{M,W} L̄_{λ,W_dense}(M, W)   s.t.   E(M, W) ≤ E_budget.   (6)
5 OPTIMIZATION
This section introduces an algorithm to solve the optimization problem formulated in (6). The overall algorithm is shown in Algorithm 1. Specifically, the algorithm includes three key parts:
• Initialization by training a dense model, i.e.,
  W_dense := argmin_W L(M, W)   (7)
• Fix M and optimize W by approximately solving (using the W_dense initialization):
  min_W L̄(M, W)   s.t.   E(M, W) ≤ E_budget   (8)
• Fix W and optimize M by approximately solving:
  min_M L̄(M, W)   s.t.   ‖M‖_0 ≤ q,  M ∈ [0, 1]   (9)
After the initialization step (Line 1 in Algorithm 1), the training algorithm iteratively alternates between the second step (Lines 3–6 in Algorithm 1) and the third step (Lines 8–13 in Algorithm 1) while gradually reducing the sparsity constraint q (Line 14 in Algorithm 1) until the training accuracy converges. Note that Equation (7) is the classic DNN training process, and solving Equation (9) involves only the well-known L0-norm projection P_{‖M‖_0 ≤ q}(Q) := argmin_{‖M‖_0 ≤ q} ‖M − Q‖². We thus focus on how Equation (8) is solved.
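The L0 projection used for the mask update has a simple closed form: keep the q entries of largest magnitude and zero the rest. A small sketch (hypothetical helper operating in place on a PyTorch tensor; call it under torch.no_grad() when the mask is a Parameter):

import torch

def project_l0(mask, q):
    """In-place projection of `mask` onto {M : ||M||_0 <= q}: keep the q
    largest-magnitude entries and zero out everything else (q >= 1 assumed)."""
    flat = mask.view(-1)
    k = max(int(q), 1)
    if k < flat.numel():
        threshold = flat.abs().topk(k).values.min()
        flat[flat.abs() < threshold] = 0.0
    return mask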
Optimizing Weight Matrix W To solve (8), one can use either projected gradient descent or projected stochastic gradient descent. The key difficulty in the optimization lies in the projection step

P_{Ω(E_budget)}(Z) := argmin_{W ∈ Ω(E_budget)} ‖W − Z‖²,   (10)
where Z could be W − η ∇_W L̄(W, M), or ∇_W L̄(W, M) can be replaced by a stochastic gradient ∇̂_W L̄(W, M). To solve the projection step, let us take a closer look at the constraint in Equation (4). We rearrange the energy constraint Ω(E_budget) into the following form with respect to W:

{ W | Σ_{u∈U∪V} α^(u)_1 min(k, ‖W^(u)‖_0) + α^(u)_2 max(0, ‖W^(u)‖_0 − k) + α^(u)_3 ‖W^(u)‖_0 + α^(u)_4 ≤ E_budget },   (11)

where W stacks all the variables {W^(u)}_{u∈U∪V}, and α^(u)_1, α^(u)_2, α^(u)_3, α^(u)_4, and k are properly defined nonnegative constants. Note that α^(u)_1 ≤ α^(u)_2 and k is a positive integer. Theorem 1 casts the energy-constrained projection problem into a 0/1 knapsack problem; the proof is included in the Appendix.

Theorem 1. The projection problem in (10) is equivalent to the following 0/1 knapsack problem:
max_{ξ binary} 〈Z ⊙ Z, ξ〉,   s.t.   〈A, ξ〉 ≤ E_budget − Σ_{u∈U∪V} α^(u)_4,   (12)

where Z stacks all the variables {Z^(u)}_{u∈U∪V}, A and ξ are of the same shape as Z, and the j-th element of A^(u) for any u ∈ U ∪ V is defined by

A^(u)_j = α^(u)_1 + α^(u)_3, if Z^(u)_j is among the top k elements of Z^(u) in terms of magnitude;
A^(u)_j = α^(u)_2 + α^(u)_3, otherwise.   (13)

The optimal solution of (10) is Z ⊙ ξ*, where ξ* is the optimal solution to the knapsack problem (12).
The knapsack problem is NP-hard, but it is possible to find an approximate solution efficiently. There exists an approximation algorithm (Chan, 2018) that can find an ε-accurate solution in O(n log(1/ε) + ε^{−2.4}) computational complexity. However, due to special structure in our problem, there exists an algorithm that can find an ε-accurate solution much faster. In the Appendix, we show that a (1 + ε)-approximate solution of problem (10) can be obtained in Õ(n + 1/ε²) time complexity (Õ omits logarithmic factors), though the implementation of the algorithm is complicated. Here we propose an efficient approximation algorithm based on the “profit density.” The profit density of item j is defined as Z²_j / A_j. We sort all items by profit density and iteratively select the largest items until the constraint boundary is reached. The detailed algorithm description is given in the Appendix (Algorithm 2). This greedy approximation algorithm also admits a nice property, as shown in Theorem 2.

Theorem 2. For the projection problem (10), the approximate solution W″ ∈ Ω(E_budget) returned by the greedy approximation algorithm admits
‖W″ − Z‖² ≤ ‖P_{Ω(E_budget)}(Z) − Z‖² + Top_{‖W″‖_0+1}((Z ⊙ Z) ⊘ A) · min(max(A) − gcd(A), R(W″)),   (14)

where max(A) is the maximal element of A, the nonnegative matrix defined in (13); Top_k(·) returns the k-th largest element of its argument; ⊘ denotes element-wise division; and gcd(·) is the largest positive rational number that divides every argument, e.g., gcd(0, 1/3, 2/3) = 1/3. In (14), gcd(A) denotes the greatest common divisor of all elements in A,² and R(W″) denotes the remaining budget

R(W″) = E_budget − Σ_{u∈U∪V} α^(u)_4 − 〈A, supp(W″)〉.

² Here we assume A only contains rational numbers since gcd is used.
The formal proof is in the Appendix. W″ is the optimal projection solution to (10) if either of the following conditions holds:

1. (The remaining budget is 0.) This means that the greedy Algorithm 2 runs out of budget;
2. (The matrix A satisfies max(A) = gcd(A).) This implies that all elements in A have the same value; in other words, the weights of all items are identical.
6 EVALUATION
The evaluations are performed on ImageNet (Deng et al., 2009), MNIST, and MS-Celeb-1M (Guo et al., 2016) datasets. For the MS-Celeb-1M, we follow the baseline setting reported in the original paper (Guo et al., 2016), which selects 500 people who have the most face images. We randomly sample 20% images as the validation set. We use both classic DNNs, including AlexNet (Krizhevsky et al., 2012) and LeNet-5 (LeCun et al., 1998), as well as recently proposed SqueezeNet (Iandola et al., 2016) and MobileNetV2 (Sandler et al., 2018).
We compare our method mainly with five state-of-the-art pruning methods: magnitude-based pruning (MP) (Han et al., 2015b;a), structured sparsity learning (SSL) (Wen et al., 2016), structured bayesian pruning (SBP) (Neklyudov et al., 2017), bayesian compression (BC) (Louizos et al., 2017), and energy-aware pruning (EAP) (Yang et al., 2017). Filter pruning methods (Li et al., 2016; He et al., 2017) require a sparsity ratio to be set for each layer, and these sparsity hyper-parameters determine the energy cost of the DNN. Since manually setting all these hyper-parameters for energy-constrained compression is not trivial, we compare directly against NetAdapt (Yang et al., 2018), which automatically searches for such sparsity ratios and uses filter pruning to compress DNN models. We implement an energy-constrained version of NetAdapt, which was originally designed to restrict inference latency. Note that MobileNetV2 and SqueezeNet have special structures (e.g., residual blocks) that are not fully supported by NetAdapt; thus, we show the results of NetAdapt only for AlexNet and LeNet-5.
[Figure 2: accuracy drop (%) versus energy consumption for the compared methods; discussed in Section 6.1.]
Hyper-parameters In the experiments, we observe that knowledge distillation can improve the performance of MP and SSL, so we apply knowledge distillation to all methods, including the baselines, for a fair comparison. The results of removing knowledge distillation from MP and SSL are included in the Appendix. We choose the distillation weight λ = 0.5. EAP proposes an alternative way to address the overfitting issue, so we directly use their results. For all the DNNs, we turn off the dropout layers since we find the knowledge distillation regularization performs better. In all experiments, we choose ∆q = 0.1|M|, where |M| is the total number of mask elements. For optimizing W, we use a pre-trained dense initialization and update W by SGD with learning rate η1 = 0.001 and weight decay 10⁻⁴. For optimizing the input mask parameters M, we use the Adam optimizer (Kingma & Ba, 2014) with η2 = 0.0001 and weight decay 10⁻⁵ (MNIST) / 10⁻⁶ (MS-Celeb-1M). To
stabilize the training process, we exponentially decay the energy budget to the target budget, and also use this trick in MP training (i.e. decaying the sparsity budget) for fair comparisons.
6.1 IMAGENET
We set an energy budget to be less than the minimal energy consumption among the three baseline methods. We use the same performance metric (i.e. top-5 test accuracy) and hardware parameters, i.e., eMAC, eDRAM, ecache, eRF, sh, sw, kW , kX , as described in the EAP paper (Yang et al., 2017). We initialize all the DNNs by a pre-trained dense model, which is also used to set up the knowledge distillation regularization. The top-5 test accuracies on the dense models are 79.1% (AlexNet), 80.5% (SqueezeNet), and 90.5% (MobileNetV2). We use batch size 128 and train all the methods with 30 epochs. For SSL and NetAdapt, we apply 20 additional epochs to achieve comparable results. We implement the projection operation PΩ(Ebudget) on GPU, and it takes < 0.2s to perform it in our experiments. The detailed wall-clock result is included in the Appendix.
Table 1 shows the top-5 test accuracy drop and energy consumption of various methods compared to the dense model. Our training framework consistently achieves a higher accuracy with a lower
energy consumption under the same energy budget. For instance, on AlexNet, under a smaller energy budget (26% < 27%), our method achieves a lower accuracy drop than EAP (0.5% vs. 0.8%). The advantage is also evident on SqueezeNet and MobileNetV2, which are already lightweight by design. EAP does not report data on MobileNetV2. We observe that weight sparsity is not a good proxy for energy consumption: our method achieves lower energy consumption despite having higher density.
Figure 2 comprehensively compares our method with prior work. Solid markers represent DNNs trained with our framework under different energy budgets (x-axis). Empty markers represent DNNs produced by previous techniques. DNNs trained by our method have lower energy with higher accuracy (i.e., solid markers are closer to the bottom-left corner than empty markers). For instance, on SqueezeNet, our most energy-consuming DNN still reduces energy by 23% while improving accuracy by 0.2% compared to EAP.
6.2 MNIST AND MS-CELEB-1M
MNIST and MS-Celeb-1M (Guo et al., 2016) represent datasets where inputs have regular patterns that are amenable to input masking. For instance, MS-Celeb-1M is a face image dataset and we use its aligned face images where most of the facial features are located in the center of an image. In such scenarios, training input masks lets us control the sparsity of the layer inputs and thus further reduce energy than merely pruning model parameters as in conventional methods. We do not claim that applying input mask is a general technique; rather, we demonstrate its effectiveness when applicable. We compare our method with MP and SSL using
LeNet-5 and MobileNetV2 for these two datasets, respectively. The pre-trained dense LeNet-5 has 99.3% top-1 test accuracy on MNIST, and the dense MobileNetV2 has 65.6% top-5 test accuracy on MS-Celeb-1M. EAP does not report data on these two datasets. Similar to the evaluation on ImageNet, we set the energy budget to be lower than the energy consumptions of MP and SSL. We use batch size 32 on MNIST and 128 on MS-Celeb-1M, and number of epochs is set the same as the ImageNet experiments. Table 2 compares the energy consumption and accuracy drop. Our method consistently achieves higher accuracy with lower energy under the same or even smaller energy budget. We visualize the sparsity of the learned input masks in Figure 3.
7 CONCLUSION
This paper demonstrates that it is possible to train DNNs with quantitative energy guarantees in an end-to-end fashion. The enabler is an energy model that relates the DNN inference energy to the DNN parameters. Leveraging the energy model, we augment the conventional DNN training with an energy-constrained optimization process, which minimizes the accuracy loss under the constraint of a given energy budget. Using an efficient algorithm, our training framework generates DNNs with higher accuracies under the same or lower energy budgets compared to prior art.
APPENDICES
DETAIL OF ENERGY CONSUMPTION FOR DATA ACCESS
FULLY CONNECTED LAYER
To multiply X^(v) ∈ R^c and W^(v) ∈ R^{c×d}, each nonzero element of W^(v) is used once but loaded three times, once each from DRAM, cache, and RF. Thus, the number of DRAM, cache, and RF accesses for the weight matrix W^(v) is:

N^weights_DRAM = N^weights_cache = N^weights_RF = ‖W^(v)‖_0.   (15)

The input X^(v) is fed into the systolic array ⌈d/s_w⌉ times, where s_w denotes the systolic array width. Thus, the number of cache accesses for X^(v) is:

N^input_cache = ⌈d/s_w⌉ ‖X^(v)‖_0.   (16)

Let k_X be the cache size for the input X^(v). If k_X is less than ‖X^(v)‖_0, there are ‖X^(v)‖_0 − k_X elements that must be reloaded from DRAM every time; the remaining k_X elements need to be loaded from DRAM only once, as they always reside in the low-level memories. Thus, there are ⌈d/s_w⌉(‖X^(v)‖_0 − k_X) + k_X DRAM accesses for X^(v). In addition, the output vector of the FC layer (the result of X^(v) W^(v)) needs to be written back to DRAM, which incurs a further d DRAM accesses. Thus, the total number of DRAM accesses to retrieve X^(v) is:

N^input_DRAM = ⌈d/s_w⌉ max(0, ‖X^(v)‖_0 − k_X) + min(k_X, ‖X^(v)‖_0) + d.   (17)

Each input element is loaded from the RF once for each MAC operation, and each MAC operation incurs two additional RF accesses for accumulation (one read and one write). Thus, the total number of RF accesses related to X^(v) is:

N^input_RF = d ‖X^(v)‖_0 + 2 ‖W^(v)‖_0.   (18)

In summary, the data access energy of a fully connected layer v is expressed as follows, where each component follows the derivations in Equations (15) through (18):

E^(v)_data = e_DRAM (N^input_DRAM + N^weights_DRAM) + e_cache (N^input_cache + N^weights_cache) + e_RF (N^input_RF + N^weights_RF).   (19)
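To make Equations (15)–(19) concrete, the sketch below counts the memory accesses of a sparse FC layer under this model. The cache size and unit energies are placeholder arguments, and the ceiling term follows the systolic-array folding described above.

import math
import numpy as np

def fc_data_access_energy(x, w, s_w, k_x,
                          e_dram=200.0, e_cache=6.0, e_rf=1.0):
    """Data access energy of an FC layer following Eqs. (15)-(19).

    x: (c,) input, w: (c, d) weights, s_w: systolic array width,
    k_x: cache capacity (in elements) reserved for the input.
    """
    nnz_x, nnz_w = np.count_nonzero(x), np.count_nonzero(w)
    d = w.shape[1]
    folds = math.ceil(d / s_w)

    n_dram_w = n_cache_w = n_rf_w = nnz_w                          # Eq. (15)
    n_cache_x = folds * nnz_x                                      # Eq. (16)
    n_dram_x = folds * max(0, nnz_x - k_x) + min(k_x, nnz_x) + d   # Eq. (17)
    n_rf_x = d * nnz_x + 2 * nnz_w                                 # Eq. (18)

    return (e_dram * (n_dram_x + n_dram_w)
            + e_cache * (n_cache_x + n_cache_w)
            + e_rf * (n_rf_x + n_rf_w))                            # Eq. (19)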
CONVOLUTION LAYER
Similar to an FC layer, the data access energy of a CONV layer u is modeled as:

E^(u)_data = e_DRAM (N^input_DRAM + N^weights_DRAM) + e_cache (N^input_cache + N^weights_cache) + e_RF (N^input_RF + N^weights_RF).   (20)

The notations are the same as in the FC layer. We now show how the different components are modeled.

To convolve W^(u) ∈ R^{d×c×r×r} with X^(u) ∈ R^{c×h×w}, each nonzero element in the weight tensor W^(u) is fed into the systolic array ⌈h′w′/s_h⌉ times, where s_h denotes the height of the systolic array and h′ and w′ are the output dimensions of X^(u). Thus,

N^weights_cache = ⌈h′w′/s_h⌉ ‖W^(u)‖_0.   (21)

Similar to the FC layer, the number of RF accesses for W^(u) during all the MAC operations is:

N^weights_RF = h′w′ ‖W^(u)‖_0.   (22)

Let k_W be the cache size for the weight tensor W^(u). If ‖W^(u)‖_0 > k_W, there are k_W nonzero elements of W^(u) that are accessed from DRAM only once, as they reside in the cache, and the remaining ‖W^(u)‖_0 − k_W elements are accessed from DRAM ⌈h′w′/s_h⌉ times. Thus,

N^weights_DRAM = ⌈h′w′/s_h⌉ max(0, ‖W^(u)‖_0 − k_W) + min(k_W, ‖W^(u)‖_0).   (23)

Let k_X be the cache size for the input X^(u). If every nonzero element in X^(u) were loaded from DRAM to cache only once, N^input_DRAM would simply be ‖X^(u)‖_0. In practice, however, the cache size k_X is much smaller than ‖X^(u)‖_0. Therefore, some portion of X^(u) needs to be reloaded. To calculate the amount of reloaded DRAM accesses, we observe that in real hardware X^(u) is loaded from DRAM to the cache at row granularity.

When the input X^(u) is dense, there are at least cw elements loaded at once. The cache first loads ⌊k_X/(cw)⌋ rows from DRAM, and after the convolutions related to these rows have finished, the cache loads the next ⌊k_X/(cw)⌋ rows of X^(u) for further processing. The rows loaded in two consecutive rounds overlap due to the nature of the convolution operation. The number of overlaps is R_overlap = ⌈h/(⌊k_X/(cw)⌋ − r + s)⌉ − 1, and each overlap has cw(r − s) elements. Thus, R_overlap · cw(r − s) elements need to be reloaded from DRAM. Finally, storing the outputs of the convolution incurs an additional d h′w′ DRAM writes. Summing the different parts together, the upper-bound (X^(u) dense) number of DRAM accesses for X^(u) is:

N^input_DRAM = ‖X^(u)‖_0 + (⌈h/(⌊k_X/(cw)⌋ − r + s)⌉ − 1) · cw(r − s) + d h′w′.   (24)

When the input X^(u) is not dense, we can still count the exact number of elements N_overlap in the overlaps of consecutive loading rounds, so we have:

N^input_DRAM = ‖X^(u)‖_0 + N_overlap + d h′w′.   (25)

Every nonzero element in the unfolded input X̄^(u) is fed into the systolic array ⌈d/s_w⌉ times (for grouped convolution, this number is divided by the number of groups). Each MAC operation introduces 2 RF accesses. Thus,

N^input_cache = ⌈d/s_w⌉ ‖X̄^(u)‖_0,   N^input_RF = d ‖X̄^(u)‖_0 + 2 h′w′ ‖W^(u)‖_0.   (26)
PROOF OF THEOREM 1
Proof. First, it is easy to see that (10) is equivalent to the following problem:

max_{ξ binary} 〈Z ⊙ Z, ξ〉,   s.t.   ξ ∈ Ω(E_budget).   (27)

Note that if the optimal solution to problem (27) is ξ̄, the solution to problem (10) can be obtained as Z ⊙ ξ̄; given the solution to (10), the solution to (27) can be obtained similarly. Therefore, we only need to prove that (27) is equivalent to (12). Meeting the following two conditions guarantees that (27) and (12) are equivalent, since they have identical objective functions:
1. Any optimal solution of problem (27) is in the constraint set of problem (12);
2. Any optimal solution of problem (12) is in the constraint set of problem (27).
Let us prove the first condition. Let ξ̂ be the optimal solution to (27). Then for any u ∈ U ∪ V, the elements of Z^(u) selected by ξ̂^(u) are the largest (in terms of magnitude) ‖ξ̂^(u)‖_0 elements of Z^(u); otherwise there would exist at least one element that could be replaced by another element with larger magnitude, which would increase the objective value in (27). Since ξ̂ ∈ Ω(E_budget), according to the definition of A in (13), ξ̂ satisfies the constraint of (12).

Let us now prove the second condition. The definition of A in (13) shows that there can be at most two different A^(u) values for each element u, and the largest k elements in Z^(u) always have the smaller value, i.e., α^(u)_1 + α^(u)_3. Let ξ̄ be the optimal solution to the knapsack problem (12). For any u ∈ U ∪ V, the elements selected by ξ̄^(u) are also the largest elements in Z^(u) in terms of magnitude; otherwise there would exist an element Z^(u)_j with larger magnitude but a smaller A^(u)_j ((13) shows that A^(u)_i ≥ A^(u)_j when |Z^(u)_i| ≤ |Z^(u)_j|). This would contradict the fact that ξ̄ is optimal. In addition, ξ̄ meets the constraint in problem (12). Therefore, ξ̄ ∈ Ω(E_budget). This completes the proof.
AN (1 + ε)-APPROXIMATE SOLUTION FOR PROBLEM (10)
Theorem 3. For the projection problem (10), there exists an efficient approximation algorithm that has a computational complexity of O((n + (|U| + |V|)³/ε²) log(n·max(A)/min(A₊))) and generates a solution W′ ∈ Ω(E_budget) that admits

‖W′ − Z‖² ≤ ‖P_{Ω(E_budget/(1+O(ε)))}(Z) − Z‖²,   (28)

where min(A₊) is the minimum of the positive elements of A.
|U| and |V| denote the number of CONV and FC layers, respectively. They are very small numbers that can be treated as constants here. Thus, the computational complexity for our problem reduces to Õ(n + 1/ε²), where Õ omits logarithmic terms. In the following, we prove this theorem by construction.
PROBLEM FORMULATION
Definition 1 (Inverted knapsack problem). Given n objects I := {(v_i, w_i)}_{i=1}^n, each with weight w_i > 0 and value v_i ≥ 0, define h_I(x) to be the smallest weight budget needed to attain total value x:

h_I(x) := min_{ξ∈{0,1}^n} Σ_{i=1}^n w_i ξ_i   s.t.   Σ_{i=1}^n v_i ξ_i ≥ x.   (29)

We are mostly interested in the case where the weights of the n objects fall into m clusters, i.e., there are only m distinct weights: |{w_i}_{i=1}^n| = m.
In our case, m is proportional to the number of layers in the DNN, and n is the number of all learnable weights in W, so m ≪ n.

Definition 2 (Inverse of a step function). The inverse of a step function f is defined as the maximal x having function value at most y:

f^{-1}(y) := max_{f(x)≤y} x.   (30)

Observation. The inverse of the step function, h_I^{-1}(y), is exactly the maximal value attainable under the weight budget y, i.e., the original knapsack problem:

h_I^{-1}(y) = max_{ξ∈{0,1}^n} Σ_{i=1}^n v_i ξ_i,   s.t.   Σ_{i=1}^n w_i ξ_i ≤ y.   (31)
Observation. Given a step function with l breakpoints, its inverse can be generated in O(l) time, and vice versa.

Thus, given the step function h_I in (29) with l breakpoints, we can obtain h_I^{-1} (i.e., the original knapsack problem) in O(l) time.

Definition 3 (w-uniform). A step function f is w-uniform if the range of f is {−∞, 0, w, 2w, ..., lw}.

Observation. If all the objects in I have the same weight w, i.e., m = 1, then the function h_I(x) is nondecreasing and w-uniform. Moreover, its breakpoints are

(0, 0), (v_1, w), (v_1 + v_2, 2w), ..., (Σ_{i=1}^n v_i, nw),

if the objects' indices follow decreasing order of value, i.e., v_1 ≥ v_2 ≥ ... ≥ v_n. Thus we obtain all possible function values of h_I(x):

h_I(x) = kw,   ∀x ∈ (Σ_{i=1}^{k−1} v_i, Σ_{i=1}^{k} v_i].
Definition 4 ((min, +)-convolution). For functions f and g, the (min, +)-convolution is

(f ⊕ g)(x) = min_{x′} (f(x′) + g(x − x′)).

Observation. If object sets I₁ ∩ I₂ = ∅, then f_{I₁∪I₂} = f_{I₁} ⊕ f_{I₂}.

Observation. The inverse of the (min, +)-convolution between a w-uniform function f and a w-uniform function g is the (max, +)-convolution between f^{-1} and g^{-1}:

(f ⊕ g)^{-1}(y) = max_{y′∈{0, 1w, ..., lw}} (f^{-1}(y′) + g^{-1}(y − y′)).   (32)
Lemma 4. For any nonnegative step functions f and g and an arbitrary number b, we always have

min{f ⊕ g, b} = min{min{f, b} ⊕ min{g, b}, b}.   (33)
Proof. Given any x, let z ∈ argmin_{x′} f(x′) + g(x − x′) and z̄ ∈ argmin_{x′} min(f(x′), b) + min(g(x − x′), b), so that (f ⊕ g)(x) = f(z) + g(x − z) and (min{f, b} ⊕ min{g, b})(x) = min(f(z̄), b) + min(g(x − z̄), b). Consider the following cases:

1. (f ⊕ g)(x) ≥ b. In this case, we claim that (min{f, b} ⊕ min{g, b})(x) ≥ b, and prove it by contradiction. Suppose (min{f, b} ⊕ min{g, b})(x) < b, which implies min(f(z̄), b) + min(g(x − z̄), b) < b. Because both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b, which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) < b. However, this contradicts (f ⊕ g)(x) ≥ b. Therefore, min((f ⊕ g)(x), b) = min((min{f, b} ⊕ min{g, b})(x), b) = b.

2. (f ⊕ g)(x) < b. In this case, we have f(z) < b and g(x − z) < b, so min(f(z̄), b) + min(g(x − z̄), b) ≤ min(f(z), b) + min(g(x − z), b) = f(z) + g(x − z) = (f ⊕ g)(x) < b. Since both f and g are nonnegative, we have f(z̄) < b and g(x − z̄) < b, which imply min(f(z̄), b) + min(g(x − z̄), b) = f(z̄) + g(x − z̄) ≥ (f ⊕ g)(x). Therefore, min(f(z̄), b) + min(g(x − z̄), b) = f(z) + g(x − z), i.e., (min{f, b} ⊕ min{g, b})(x) = (f ⊕ g)(x).
EFFICIENCY OF (MIN, +)-CONVOLUTION
Lemma 5. Let f and g be nondecreasing w-uniform functions with O(l) breakpoints; the (min, +)-convolution f ⊕ g (having O(l) breakpoints) can be generated with O(l²) time complexity.
Proof. First, we compute the inverse representations of f and g, i.e., we compute f^{-1} and g^{-1} from Equation (30). The inverse representation can be computed in O(l) time (proportional to the number of breakpoints). From Equation (32), we can compute the inverse of f ⊕ g. For each y ∈ {0, 1w, ..., 2lw}, the value (f ⊕ g)^{-1}(y) can be computed in O(l) time by brute force. Thus a total of O(l²) suffices to obtain (f ⊕ g)^{-1}, which has O(l) breakpoints. We then recover f ⊕ g from (f ⊕ g)^{-1} via the inverse definition (30) in O(l) time.
Lemma 6. Let f and g be nondecreasing step functions with l breakpoints in total. Then min{f ⊕ g, b} can be approximated by a step function φ_b with O(l + 1/ε²) complexity and 2εb additive error, i.e., min{f ⊕ g, b} ≤ φ_b ≤ min{f ⊕ g, b} + 2εb. The resulting function φ_b has O(1/ε) breakpoints.
Proof. We can construct (εb)-uniform functions f′_b and g′_b, each with ⌈1/ε⌉ breakpoints:

f′_b(x) = ⌈min(b, f(x)) / (εb)⌉ · εb,   g′_b(x) = ⌈min(b, g(x)) / (εb)⌉ · εb.

This requires O(l) computational complexity. From Lemma 5, we can compute f′_b ⊕ g′_b with O(1/ε²) time complexity, and φ_b = min{f′_b ⊕ g′_b, b} has O(1/ε) breakpoints. Because f′_b and g′_b are constructed by ceiling min{f, b} and min{g, b}, we have

min{f, b} ⊕ min{g, b} ≤ f′_b ⊕ g′_b ≤ min{f, b} ⊕ min{g, b} + 2εb,

which implies

min{min{f, b} ⊕ min{g, b}, b} ≤ min{f′_b ⊕ g′_b, b} ≤ min{min{f, b} ⊕ min{g, b}, b} + 2εb.

From Lemma 4, we know that min{min{f, b} ⊕ min{g, b}, b} = min{f ⊕ g, b}, which completes the proof.
Lemma 7. Let f₁, f₂, ..., f_m be nondecreasing step functions with l breakpoints in total. Then min{f₁ ⊕ f₂ ⊕ ... ⊕ f_m, b} can be approximated by a step function ψ_b with O(l + m/ε²) computational complexity and mεb additive error. The resulting function ψ_b has O(1/ε) breakpoints.
Proof. Lemma 6 covers the case m = 2. For general m > 2, we can build a binary tree that approximates pairs of functions; e.g., if m = 4, we first approximate ψ^(1) ≈ min{f₁ ⊕ f₂, b} and ψ^(2) ≈ min{f₃ ⊕ f₄, b}, and then approximate ψ^(3)_b ≈ min{ψ^(1) ⊕ ψ^(2), b}. In this way, we construct a binary tree with O(log m) depth and O(m) nodes. In the beginning, we use the ceiling function to construct m new (εb)-uniform functions:

f′_{i,b}(x) = ⌈min(b, f_i(x)) / (εb)⌉ · εb,   ∀i ∈ {1, 2, ..., m}.

Then we use the binary tree to “merge” all m functions in pairs over O(log m) iterations. Without loss of generality, assume m is a power of two. We recursively merge t functions into t/2 functions:

1. Initialize t = m, g′_{i,b} = f′_{i,b}, ∀i ∈ {1, ..., t}.
2. Reassign g′_{i,b} = min{g′_{2i−1,b} ⊕ g′_{2i,b}, b}, ∀i ∈ {1, ..., t/2}. According to Lemma 6, the number of breakpoints of min{g′_{2i−1,b} ⊕ g′_{2i,b}, b} remains O(1/ε).
3. Set t = t/2. If t > 1, go back to Step 2.
4. Return ψ_b := min{g′_{1,b}, b}.

In this binary tree, the functions at the leaf nodes have εb additive error, and every (min, +)-convolution f′ ⊕ g′ accumulates the additive errors of the two functions f′ and g′. The root node of the binary tree accumulates the additive errors of all m leaf nodes, so the resulting function satisfies ψ_b ≤ min{f₁ ⊕ ... ⊕ f_m, b} + mεb. For the computational complexity, initializing the f′_{i,b} takes O(l), Step 1 takes O(l), Steps 2 and 3 take O(m/ε²) (since there are O(m) nodes in the binary tree), and Step 4 takes O(m/ε). Therefore, the total is O(l + m/ε²).
Lemma 8. For the inverted knapsack problem defined in Equation (29), if all n objects can be separated into m groups I₁, ..., I_m with m distinct weights, there exists an approximation algorithm with computational complexity O((n + m³/ε²) log(n·max(w)/min(w))) that approximates h_I by h̃_I satisfying

h_I(x) ≤ h̃_I(x) ≤ (1 + O(ε)) h_I(x),   ∀x.
Proof. First, the step functions h_{I_i}, ∀i ∈ {1, 2, ..., m}, can be generated in O(n log n) time by sorting the objects of each group by value (in descending order). From the definition of the (min, +)-convolution, we know that h_I = h_{I₁} ⊕ ... ⊕ h_{I_m}. We construct an algorithm to approximate h_I:

1. Construct a set B := {2^i n·max(w) ∈ [min(w), n·max(w)] ; i ∈ Z_{≤0}}, where min(w) and max(w) are the minimum and maximum item weights, respectively, and Z_{≤0} is the set of nonpositive integers. We have |B| = O(log(n·max(w)/min(w))).
2. For every b ∈ B, construct ψ_b to approximate min{h_{I₁} ⊕ ... ⊕ h_{I_m}, b} based on Lemma 7.
3. Construct the function h̃_I^{-1}:

h̃_I^{-1}(y) = ψ_b^{-1}(y), if b/2 < y ≤ b and y > min(B);   h̃_I^{-1}(y) = ψ_{min(B)}^{-1}(y), if y ≤ min(B),

where min(B) is the minimum element of B. The resulting function h̃_I^{-1} (or h̃_I) has at most O((1/ε) log(n·max(w)/min(w))) breakpoints.
4. Compute the original function h̃_I from h̃_I^{-1}.

According to the above procedure, for any h_I(x) ∈ (b/2, b], h̃_I(x) approximates h_I(x) with additive error O(mεb), so we have h_I(x) ≤ h̃_I(x) ≤ (1 + O(mε)) h_I(x). The algorithm takes O((n + m/ε²) log(n·max(w)/min(w))) time. If we require the approximation factor to be 1 + O(ε), i.e.,

h_I(x) ≤ h̃_I(x) ≤ (1 + O(ε)) h_I(x),   ∀x,

we need O((n + m³/ε²) log(n·max(w)/min(w))) time complexity.
Theorem 9. For the knapsack problem defined in Equation (12), if all n objects have m distinct weights, there exists an approximation algorithm with computational complexity O((n + m³/ε²) log(n·max(w)/min(w))) that generates a function h̃_I^{-1} satisfying:

h_I^{-1}(y / (1 + O(ε))) ≤ h̃_I^{-1}(y) ≤ h_I^{-1}(y),   ∀y.
Proof. From Lemma 8, we have h̃_I(x) ≤ (1 + O(ε)) h_I(x), which implies

{x | (1 + O(ε)) h_I(x) ≤ y} ⊆ {x | h̃_I(x) ≤ y}.

So

max_{h_I(x) ≤ y/(1+O(ε))} x ≤ max_{h̃_I(x) ≤ y} x   ⇔   h_I^{-1}(y / (1 + O(ε))) ≤ h̃_I^{-1}(y).

Similarly, we can obtain {x | h̃_I(x) ≤ y} ⊆ {x | h_I(x) ≤ y} from Lemma 8, so we have

max_{h_I(x) ≤ y} x ≥ max_{h̃_I(x) ≤ y} x   ⇔   h_I^{-1}(y) ≥ h̃_I^{-1}(y).
Let I₊ be the set of objects whose weights are the nonzero elements of A and whose values are the corresponding elements of Z ⊙ Z, i.e., I₊ = {(Z_i², A_i) | i ∈ {1, 2, ..., |A|} and A_i > 0}, and let ξ̃⁺ be the solution corresponding to h̃_{I₊}^{-1}(E_budget − Σ_{u∈U∪V} α^(u)_4). Let ξ̃ on I₊ᶜ be 1 and ξ̃ on I₊ be ξ̃⁺, where I₊ᶜ = {(Z_i², A_i) | i ∈ {1, 2, ..., |A|} and A_i = 0} is the complement of I₊. Here we have m ≤ 2|U| + |V| distinct values in A. According to Theorem 9, we have 〈Z ⊙ Z, ξ̃〉 ≥ max_ξ 〈Z ⊙ Z, ξ〉 s.t. 〈A, ξ〉 ≤ (E_budget − Σ_{u∈U∪V} α^(u)_4) / (1 + O(ε)), which implies

〈Z ⊙ Z, ξ̃〉 ≥ max_{ξ ∈ Ω(E_budget/(1+O(ε)))} 〈Z ⊙ Z, ξ〉.

From Theorem 9, we directly obtain Theorem 3.
Algorithm 2: Greedy Algorithm to Solve Problem (12).
Input: Z, A, E_budget, {α^(u)}_{u∈U∪V} as in (12).
Result: Greedy solution ξ̃ for problem (12).
1   Initialize b = 0, ξ = 0.
2   Generate the profit density δ: δ_j = (Z_j)²/A_j if A_j > 0; δ_j = ∞ if A_j = 0.
3   Sort δ; let I be the list of indices of the sorted δ (in descending order).
4   foreach index j ∈ I do
5       b = b + A_j;
6       If b > E_budget − Σ_{u∈U∪V} α^(u)_4, exit loop;
7       ξ_j = 1;
8   end
9   ξ̃ = ξ.
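A NumPy rendering of Algorithm 2 is sketched below; `z` and `a` are the flattened Z and A tensors and `budget` is assumed to already have the constant Σ_u α^(u)_4 terms subtracted. The sort-by-profit-density loop follows the pseudocode above.

import numpy as np

def greedy_knapsack_projection(z, a, budget):
    """Greedy solution of the 0/1 knapsack in Eq. (12) (Algorithm 2).

    z, a: 1-D arrays (flattened Z and A); budget: E_budget minus the constant
    alpha_4 terms. Returns a binary selection vector xi; the projected
    weights are then z * xi (Theorem 1).
    """
    profit = z ** 2
    density = np.where(a > 0, profit / np.maximum(a, 1e-12), np.inf)  # profit density
    order = np.argsort(-density)                                      # descending
    xi = np.zeros_like(z, dtype=np.float64)
    spent = 0.0
    for j in order:
        spent += a[j]
        if spent > budget:
            break
        xi[j] = 1.0
    return xi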
PROOF OF THEOREM 2
Proof. From Theorem 1, we know that the original projection problem (10) is equivalent to the knapsack problem (12). So proving inequality (14) is equivalent to proving

〈Z ⊙ Z, ξ̃〉 ≥ 〈Z ⊙ Z, ξ*〉 − Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · R(ξ̃)   (34)

and

〈Z ⊙ Z, ξ̃〉 ≥ 〈Z ⊙ Z, ξ*〉 − Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · (max(A) − gcd(A)),   (35)

where ξ̃ is the greedy solution of the knapsack problem corresponding to W″, and ξ* is the exact solution of the knapsack problem corresponding to P_{Ω(E_budget)}(Z), i.e.,

W″ = Z ⊙ ξ̃,   P_{Ω(E_budget)}(Z) = Z ⊙ ξ*.

First, let us prove inequality (34). If we relax the values of ξ to lie in the range [0, 1] instead of {0, 1}, the discrete constraint is removed and the constraint set becomes

∆ = { ξ | 0 ≤ ξ ≤ 1 and 〈A, ξ〉 ≤ E_budget − Σ_{u∈U∪V} α^(u)_4 }.

So the 0/1 knapsack problem is relaxed to a linear program. This relaxed problem is called the fractional knapsack problem, and there is a greedy algorithm (Dantzig, 1957) that solves it exactly. Slightly different from our Algorithm 2, the greedy algorithm for the fractional knapsack can select a fraction of an item, so its remaining budget is always zero. The optimal objective value of the fractional knapsack is

max_{ξ∈∆} 〈Z ⊙ Z, ξ〉 = 〈Z ⊙ Z, ξ̃〉 + Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · R(ξ̃).

Since the constraint set of the fractional knapsack problem is a superset of the constraint set of the original knapsack problem, we have 〈Z ⊙ Z, ξ*〉 ≤ max_{0≤ξ≤1} 〈Z ⊙ Z, ξ〉, which leads to inequality (34).

Second, we show that inequality (35) also holds. Since all coefficients in A are multiples of gcd(A), we can relax the original 0/1 knapsack problem as follows: split each item into several items whose coefficients in the constraint are gcd(A), with the coefficients in the objective function split equally. The j-th item, with coefficient A_j in the constraint and coefficient (Z ⊙ Z)_j in the objective function, is split into A_j / gcd(A) items, each associated with coefficient (Z_j²/A_j) · gcd(A) in the objective function. This relaxation gives a new 0/1 knapsack problem in which all items have the same coefficient in the constraint, so the optimal solution simply selects the items with the largest coefficients in the objective function. We can formulate this as a relaxed knapsack problem by replacing the constraint on ξ with ξ ∈ Γ, where

Γ = { ξ | for all j, ξ_j is a multiple of gcd(A)/A_j, 0 ≤ ξ_j ≤ 1, and 〈A, ξ〉 ≤ E_budget − Σ_{u∈U∪V} α^(u)_4 }.

All elements of the solution are either 0 or 1 except the last picked one, which corresponds to Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A). Let the (‖ξ̃‖_0 + 1)-th largest element of (Z ⊙ Z) ⊘ A be indexed by t. We have 0 ≤ ξ̃_t ≤ 1 − gcd(A)/A_t. Therefore, compared with the original 0/1 knapsack problem, we have

max_{ξ∈Γ} 〈Z ⊙ Z, ξ〉 ≤ 〈Z ⊙ Z, ξ̃〉 + (Z ⊙ Z)_t · (1 − gcd(A)/A_t)
 = 〈Z ⊙ Z, ξ̃〉 + Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · A_t · (1 − gcd(A)/A_t)
 = 〈Z ⊙ Z, ξ̃〉 + Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · (A_t − gcd(A))
 ≤ 〈Z ⊙ Z, ξ̃〉 + Top_{‖ξ̃‖_0+1}((Z ⊙ Z) ⊘ A) · (max(A) − gcd(A)).

Since {ξ | ξ is binary} ⊆ Γ, we have 〈Z ⊙ Z, ξ*〉 ≤ max_{ξ∈Γ} 〈Z ⊙ Z, ξ〉. This gives inequality (35).
SUPPLEMENTARY EXPERIMENT RESULTS
RESULTS OF BASELINE WITHOUT KNOWLEDGE DISTILLATION
Table 3 shows the energy and accuracy drop results of the baseline methods MP and SSL when knowledge distillation is removed from their loss function. With knowledge distillation, the results in Table 1 are much better. Therefore, we use knowledge distillation in all experiments where it is applicable.
ENERGY-CONSTRAINED PROJECTION EFFICIENCY
The projection operation P_{Ω(E_budget)} in Algorithm 1 can be implemented on a GPU. We measured its wall-clock time on a GPU server (CPU: Xeon E3 1231-v3, GPU: GTX 1080 Ti), and the result is shown in Table 4 (the time is averaged over 100 iterations).
| 1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths of the proposed method in terms of energy consumption and accuracy?
3. What are the concerns regarding the industry-strength DNN hardware simulator ScaleSim?
4. How does the reviewer assess the assumption about skipping accessed data in hardware?
5. What are some minor issues with the paper's content? | Review | Review
The paper is dedicated to energy-based compression of deep neural networks. While most works on compression aim to decrease the number of parameters or the number of operations in order to speed up inference or reduce the memory footprint, these approaches do not provide any guarantees on energy consumption. In this work the authors derived a loss for training NNs with energy constraints and provided an optimization algorithm for it. The authors showed that the proposed method achieves higher accuracy with lower energy consumption given the same energy budget. The experimental results are quite interesting and even include the highly optimized network MobileNetV2.
Several questions and concerns.
‘Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim’. Could the authors please elaborate on this sentence?
One of the main assumptions is the following: if the value of the data is zero, the hardware can skip accessing the data. As far as I know, this is quite a strong assumption that is not supported by many architectures. How do the authors take into account the overhead of using sparse data formats in such hardware in their estimations? Is it possible to simulate such behavior in ScaleSim? Moreover, in many modern systems DRAM can only be read in chunks. Therefore it can change the number of DRAM accesses in (4).
Small typos and other issues:
Page 8. ‘There exists an algorithm that can find an an \epsilon’
Page 8.’ But it is possible to fan approximate solution’
Page 4. It is better to put the sentence ‘where s convolutional stride’ after (2).
In the formulation of Theorem 3, it is better to explicitly state that A contains rational numbers only, since gcd is used.
Overall, the paper is written clearly and organized well, contains interesting experimental and theoretical results. |
ICLR | Title
A Scalable Laplace Approximation for Neural Networks
Abstract
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
1 INTRODUCTION
Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them. This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold. While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease.
The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values. Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability. However, the posterior of neural networks is usually intractable due to their size and nonlinearity.
There has been previous interest in integrating neural networks into the Bayesian framework (MacKay, 1992; Hinton & Van Camp, 1993; Neal, 1993; Barber & Bishop, 1998), however these approaches were designed for small networks by current standards. Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable. All of (Graves, 2011; Hernández-Lobato & Adams, 2015; Blundell et al., 2015) assume independence between the individual weights. While they achieve good results on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound. The approach in (Gal & Ghahramani, 2016) requires the use of certain stochastic regularisers which are not commonly present in most recent architectures. Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior.
Recent work on second-order optimisation of neural networks (Martens & Grosse, 2015; Botev et al., 2017) has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product. We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation (MacKay, 1992) with Kronecker factored covariance matrices. This leads to a computationally efficient matrix normal posterior distribution
∗Corresponding author: j.ritter@cs.ucl.ac.uk
(Gupta & Nagar, 1999) over the weights of every layer. Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks.
2 THE CURVATURE OF NEURAL NETWORKS
Our method is inspired by recent Kronecker factored approximations of the curvature of a neural network (Martens & Grosse, 2015; Botev et al., 2017) for optimisation and we give a high-level review of these in the following. While the two methods approximate the Gauss-Newton and Fisher matrix respectively, as they are guaranteed to be positive semi-definite (p.s.d.), we base all of our discussion on the Hessian in order to be as general as possible.
2.1 NEURAL NETWORK NOTATION
We denote a feedforward network as taking an input a0 = x and producing an output hL. The intermediate representations for layers λ = 1, ..., L are denoted as hλ = Wλaλ−1 and aλ = fλ(hλ). We refer to aλ as the activations, and hλ as the (linear) pre-activations. The bias terms are absorbed into the Wλ by appending a 1 to each aλ. The network parameters are optimised w.r.t. an error function E(y, hL) for targets y. Most commonly used error functions, such as squared error and categorical cross-entropy, can be interpreted as exponential family negative log likelihoods − log p(y|hL).
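To make this notation concrete, here is a minimal NumPy sketch of such a forward pass with the bias absorbed into each weight matrix (the layer sizes and the tanh transfer function are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

def forward(weights, x, f=np.tanh):
    """Forward pass a_0 = x, h_l = W_l a_{l-1}, a_l = f(h_l), biases absorbed.

    weights: list of arrays W_l of shape (D_l, D_{l-1} + 1); x: input a_0.
    Returns the linear network output h_L and all intermediate activations.
    """
    a = x
    activations = [a]
    for i, W in enumerate(weights):
        a_ext = np.append(a, 1.0)                  # append a 1 to absorb the bias
        h = W @ a_ext                              # linear pre-activation h_l
        a = f(h) if i < len(weights) - 1 else h    # network output h_L stays linear
        activations.append(a)
    return a, activations

# tiny example: a 2 -> 3 -> 1 network with random weights
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 3)), rng.normal(size=(1, 4))]
out, acts = forward(weights, np.array([0.5, -1.0]))
```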
2.2 KRONECKER FACTORED SECOND-ORDER OPTIMISATION
Traditional second-order methods use either the Hessian matrix or a positive semi-definite approximation thereof to generate parameter updates of the form ∆ = C⁻¹g, where C is the chosen curvature matrix and g the gradient of the error function parameterised by the network. However, this curvature matrix is infeasible to compute for modern neural networks as their number of parameters is often in the millions, rendering the size of C of the order of several terabytes.
Recent work (Martens & Grosse, 2015; Botev et al., 2017) exploits that, for a single data point, the diagonal blocks of these curvature matrices are Kronecker factored:
Hλ = ∂²E / (∂ vec(Wλ) ∂ vec(Wλ)) = Qλ ⊗ Hλ   (1)
where Hλ is the Hessian w.r.t. the weights in layer λ, Qλ = aλ−1 aλ−1ᵀ denotes the covariance of the incoming activations aλ−1, and Hλ = ∂²E / (∂hλ ∂hλ) is the pre-activation Hessian, i.e. the Hessian of the error w.r.t. the linear pre-activations hλ in a layer. We provide the derivation for this result as well as the recursion for calculating H in Appendix A. The Kronecker factorisation holds two key advantages: the matrices that need to be computed and stored are much smaller — if we assume all layers to be of dimensionality D, the two factors are each of size D², whereas the full Hessian for the weights of only one layer would have D⁴ elements. Furthermore, the inverse of a Kronecker product is equal to the Kronecker product of the inverses, so it is only necessary to invert those two moderately sized matrices.
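Both advantages are easy to verify numerically. The following toy sketch (with an arbitrary small layer width, not code from the paper) checks that the inverse of the Kronecker product equals the Kronecker product of the inverses and compares the storage of the two factors with that of the full block:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4                                     # toy layer width

# Kronecker factors: activation covariance Q and pre-activation Hessian H
a = rng.normal(size=D)
Q = np.outer(a, a) + np.eye(D)            # made positive definite so it can be inverted
B = rng.normal(size=(D, D))
H = B @ B.T + np.eye(D)

full = np.kron(Q, H)                      # the D^2 x D^2 curvature block of one layer
inv_from_factors = np.kron(np.linalg.inv(Q), np.linalg.inv(H))
assert np.allclose(np.linalg.inv(full), inv_from_factors)

# storage: two D x D factors versus one D^2 x D^2 block
print(2 * D**2, "entries vs", D**4)
```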
In order to maintain this structure over a minibatch of data, all Kronecker factored second-order methods make two core approximations: First, they only model the diagonal blocks corresponding to the weights of a layer, such that the curvature decomposes into L independent matrices. Second, they assume Qλ andHλ to be independent. This is in order to maintain the Kronecker factorisation in expectation, i.e. E [Qλ ⊗Hλ] ≈ E [Qλ] ⊗ E [Hλ], since the expectation of a Kronecker product is not guaranteed to be Kronecker factored itself.
The main difference between the Kronecker factored second-order optimisers lies in how they efficiently approximate E [Hλ]. For exact calculation, it would be necessary to pass back an entire matrix per data point in a minibatch, which imposes infeasible memory and computational requirements. KFRA (Botev et al., 2017) simply passes back the expectation at every layer, while KFAC (Martens & Grosse, 2015) utilises the Fisher identity to only propagate a vector rather than a matrix, approximating the Kronecker factors with a stochastic rank-one matrix for each data point.
The diagonal blocks of the Hessian and Gauss-Newton matrix are equal for neural networks with piecewise linear activation functions (Botev et al., 2017), thus both methods can be used to directly approximate the diagonal blocks of the Hessian of such networks, as the Gauss-Newton and Fisher are equivalent for networks that parameterise an exponential family log likelihood.
3 A SCALABLE LAPLACE APPROXIMATION FOR NEURAL NETWORKS
3.1 THE LAPLACE APPROXIMATION
The standard Laplace approximation is obtained by taking the second-order Taylor expansion around a mode of a distribution. For a neural network, such a mode can be found using standard gradient-based methods. Specifically, if we approximate the log posterior over the weights of a network given some data D around a MAP estimate θ∗, we obtain:
log p(θ|D) ≈ log p(θ∗|D) − ½ (θ − θ∗)ᵀ H̄ (θ − θ∗)   (2)
where θ = [vec(W1), ..., vec(WL)] is the stacked vector of weights and H̄ = E [H] the average Hessian of the negative log posterior1. The first order term is missing because we expand the function around a maximum θ∗, where the gradient is zero. If we exponentiate this equation, it is easy to notice that the right-hand side is of Gaussian functional form for θ, thus we obtain a normal distribution by integrating over it. The posterior over the weights is then approximated as Gaussian:
θ ∼ N (θ∗, H̄−1) (3)
assuming H̄ is p.s.d. We can then approximate the posterior mean when predicting on unseen data D∗ by averaging the predictions of T Monte Carlo samples θ(t) from the approximate posterior:
p(D∗|D) = ∫ p(D∗|θ) p(θ|D) dθ ≈ (1/T) Σ_{t=1}^T p(D∗|θ^(t))   (4)
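A minimal sketch of this Monte Carlo prediction (Eqs. 3 and 4) for a toy model; the `predict` function and the dense covariance below are placeholders, and for a real network the Kronecker factored posterior of Section 3.3 would replace the full H̄⁻¹:

```python
import numpy as np

def laplace_predict(predict, theta_map, H_bar, x, T=100, rng=None):
    """Monte Carlo approximation of the posterior predictive (Eq. 4).

    predict(theta, x) -> prediction; theta_map: MAP weights as a flat vector;
    H_bar: average Hessian of the negative log posterior at theta_map.
    """
    rng = rng or np.random.default_rng(0)
    cov = np.linalg.inv(H_bar)                          # posterior covariance (Eq. 3)
    samples = rng.multivariate_normal(theta_map, cov, size=T)
    preds = np.stack([predict(theta, x) for theta in samples])
    return preds.mean(axis=0)                           # average over the T samples

# toy usage: a 2-parameter logistic regression "network" (purely illustrative)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
predict = lambda theta, x: sigmoid(x @ theta)
p = laplace_predict(predict, theta_map=np.zeros(2),
                    H_bar=np.eye(2), x=np.array([1.0, -0.5]))
```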
3.2 DIAGONAL LAPLACE APPROXIMATION
Unfortunately, it is not feasible to compute or invert the Hessian matrix w.r.t. all of the weights jointly. An approximation that is easy to compute in modern automatic differentiation frameworks is the diagonal of the Fisher matrix F , which is simply the expectation of the squared gradients:
H ≈ diag(F) = diag(E[∇θ log p(y|x) ∇θ log p(y|x)ᵀ]) = diag(E[(∇θ log p(y|x))²])   (5)
where diag extracts the diagonal of a matrix or turns a vector into a diagonal matrix. Such diagonal approximations to the curvature of a neural network have been used successfully for pruning the weights (LeCun et al., 1990) and, more recently, for transfer learning (Kirkpatrick et al., 2017).
This corresponds to modelling the weights with a Normal distribution with diagonal covariance:
vec(Wλ) ∼ N(vec(W∗λ), diag(Fλ)⁻¹)   for λ = 1, . . . , L   (6)
Unfortunately, even if the Taylor approximation is accurate, this will place significant probability mass in low probability areas of the true posterior if some weights exhibit high covariance.
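As a sketch of Eq. 5, the diagonal Fisher is just the mean of the squared per-example gradients (random stand-in gradients are used below; in practice they would come from the framework's automatic differentiation):

```python
import numpy as np

def diagonal_fisher(grads):
    """Diagonal Fisher approximation: expectation of squared gradients (Eq. 5).

    grads: array of shape (N, P) with per-example gradients of log p(y|x).
    """
    return np.mean(grads ** 2, axis=0)

# toy example with random stand-in gradients for a 5-parameter model
rng = np.random.default_rng(2)
grads = rng.normal(size=(1000, 5))
F_diag = diagonal_fisher(grads)      # diagonal precision of the Gaussian in Eq. 6
```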
1The average Hessian is typically scaled by the number of data points N . In order to keep the notation uncluttered, we develop our basic methods in terms of the average Hessian and discuss the scaling separately.
3.3 KRONECKER FACTORED LAPLACE APPROXIMATION
So while it is desirable to model the covariance between the weights, some approximations are needed in order to remain computationally efficient. First, we assume the weights of the different layers to be independent. This corresponds to the block-diagonal approximation in KFAC and KFRA, which empirically preserves sufficient information about the curvature to obtain competitive optimisation performance. For our purposes this means that our posterior factorises over the layers.
As discussed above, the Hessian of the log-likelihood for a single datapoint is Kronecker factored, and we denote the two factor matrices as Hλ = Qλ ⊗ Hλ.² By further assuming independence between Q and H in all layers, we can approximate the expected Hessian of each layer as:
E [Hλ] = E [Qλ ⊗Hλ] ≈ E [Qλ]⊗ E [Hλ] (7)
Hence, the Hessian of every layer is Kronecker factored over an entire dataset and the Laplace approximation can be approximated by a product of Gaussians. Each Gaussian has a Kronecker factored covariance, corresponding to a matrix normal distribution (Gupta & Nagar, 1999), which considers the two Kronecker factors of the covariance to be the covariances of the rows and columns of a matrix. The two factors are much smaller than the full covariance and allow for significantly more efficient inversion and sampling (we review the matrix normal distribution in Appendix B).
Our resulting posterior for the weights in layer λ is then:
Wλ ∼ MN(W∗λ, Q̄λ⁻¹, H̄λ⁻¹)   (8)
In contrast to optimisation methods, we do not need to approximate E [Hλ] as it is only calculated once. However, when it is possible to augment the data (e.g. randomised cropping of images), it may be advantageous. We provide a more detailed discussion of this in Appendix C.
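Sampling a layer's weights from the matrix normal posterior in Eq. 8 only needs the Cholesky factors of the two small precision matrices. Below is a sketch with generic row/column precision factors (per the paper, one of them is the regularised Q̄λ and the other H̄λ; the toy sizes are arbitrary):

```python
import numpy as np

def sample_layer_weights(W_map, row_prec, col_prec, rng):
    """Draw one sample W ~ MN(W_map, inv(row_prec), inv(col_prec)), cf. Eq. 8.

    W_map: MAP weights of shape (n, p); row_prec: n x n precision factor;
    col_prec: p x p precision factor (one of them Q_bar, the other H_bar).
    """
    # A A^T = inv(row_prec) and B^T B = inv(col_prec) via Cholesky of the precisions
    A = np.linalg.inv(np.linalg.cholesky(row_prec)).T
    B = np.linalg.inv(np.linalg.cholesky(col_prec))
    Z = rng.normal(size=W_map.shape)       # standard matrix normal sample
    return W_map + A @ Z @ B

# toy usage: a 3 x 4 layer with well-conditioned precision factors
rng = np.random.default_rng(3)
W = sample_layer_weights(np.zeros((3, 4)), np.eye(3) * 2.0, np.eye(4) * 5.0, rng)
```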
3.4 INCORPORATING THE PRIOR AND REGULARISING THE CURVATURE FACTORS
Just as the log posterior, the Hessian decomposes into a term depending on the data log likelihood and one on the prior. For the commonly used L2-regularisation, corresponding to a Gaussian prior, the Hessian is equal to the precision of the prior times the identity matrix. We approximate this by adding a multiple of the identity to each of the Kronecker factors from the log likelihood:
Hλ = N E[−∂² log p(D|θ) / ∂θ²] + τI ≈ (√N E[Qλ] + √τ I) ⊗ (√N E[Hλ] + √τ I)   (9)
where τ is the precision of the Gaussian prior on the weights and N the size of the dataset. However, we can also treat them as hyperparameters and optimise them w.r.t. the predictive performance on a validation set. We emphasise that this can be done without retraining the network, so it does not impose a large computational overhead and is trivial to parallelise.
Setting N to a larger value than the size of the dataset can be interpreted as including duplicates of the data points as pseudo-observations. Adding a multiple of the identity to the precision matrix decreases the uncertainty about each parameter. This has a regularising effect both on our approximation to the true Laplace, which may be overestimating the variance in certain directions due to ignoring the covariances between the layers, as well as the Laplace approximation itself, which may be placing probability mass in low probability areas of the true posterior.
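A small sketch of the regularised factors of Eq. 9, treating N and τ as hyperparameters to be tuned on a validation set without retraining (the grid-search loop itself is omitted here):

```python
import numpy as np

def regularised_factors(Q_mean, H_mean, N, tau):
    """Return the two regularised Kronecker factors of Eq. 9 for one layer.

    Q_mean, H_mean: dataset averages E[Q] and E[H] of that layer;
    N: (pseudo-)dataset size, tau: precision of the Gaussian prior.
    """
    Q_reg = np.sqrt(N) * Q_mean + np.sqrt(tau) * np.eye(Q_mean.shape[0])
    H_reg = np.sqrt(N) * H_mean + np.sqrt(tau) * np.eye(H_mean.shape[0])
    # their Kronecker product approximates N * E[Hessian] + tau * I
    return Q_reg, H_reg

# N and tau would be chosen by grid search on validation performance
```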
4 RELATED WORK
Most recent attempts at approximating the posterior of a neural network are based on formulating an approximate distribution to the posterior and optimising the variational lower bound w.r.t. its parameters. (Graves, 2011; Blundell et al., 2015; Kingma et al., 2015) as well as the expectation propagation based approaches of (Hernández-Lobato & Adams, 2015) and (Ghosh et al., 2016) assume independence between the individual weights which, particularly when optimising the KL divergence, often lets the model underestimate the uncertainty about the weights. Gal & Ghahramani (2016) interpret Dropout to approximate the posterior with a mixture of delta functions, assuming independence between the columns. (Lakshminarayanan et al., 2016) suggest using an ensemble of networks for estimating the uncertainty.
²We assume a uniform prior for now, such that the Hessians of the posterior and the log likelihood are equal. We discuss how we incorporate a non-zero Hessian of a prior into the Kronecker factors in the next section.
Our work is a scalable approximation of (MacKay, 1992). Since the per-layer Hessian of a neural network is infeasible to compute, we suggest a factorisation of the covariance into a Kronecker product, leading to a more efficient matrix normal distribution. The posterior that we obtain is reminiscent of (Louizos & Welling, 2016) and (Sun et al., 2017), who optimise the parameters of a matrix normal distribution as their weights, which requires a modification of the training procedure.
5 EXPERIMENTS
Since the Laplace approximation is a method for predicting in a Bayesian manner and not for training, we focus on comparing to uncertainty estimates obtained from Dropout (Gal & Ghahramani, 2016). The trained networks will be identical, but the prediction methods will differ. We also compare to a diagonal Laplace approximation to highlight the benefit from modelling the covariances between the weights. All experiments are implemented using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015).3
5.1 TOY REGRESSION DATASET
As a first experiment, we visualise the uncertainty obtained from the Laplace approximations on a toy regression dataset, similar to (Hernández-Lobato & Adams, 2015). We create a dataset of 20 uniformly distributed points x ∼ U(−4, 4) and sample y ∼ N (x3, 32). In contrast to (HernándezLobato & Adams, 2015), we use a two-layer network with seven units per layer rather than one layer with 100 units. This is because both the input and output are one-dimensional, hence the weight matrices are vectors and the matrix normal distribution reduces to a multivariate normal distribution. Furthermore, the Laplace approximation is sensitive to the ratio of the number of data points to parameters, and we want to visualise it both with and without hyperparameter tuning.
Fig. 1 shows the uncertainty obtained from the Kronecker factored and diagonal Laplace approximation applied to the same network, as well as from a full Laplace approximation and 50, 000 HMC (Neal, 1993) samples. The latter two methods are feasible only for such a small model and dataset. For the diagonal and full Laplace approximation we use the Fisher identity and draw one sample per data point. We set the hyperparameters of the Laplace approximations (see Section 3.4) using a grid search over the likelihood of 20 validation points that are sampled the same way as the training set.
3We make our fork available at: https://github.com/BB-UCL/Lasagne
The regularised Laplace approximations all give an overall good fit to the HMC predictive posterior. Their uncertainty is slightly higher close to the training data and increases more slowly away from the data than that of the HMC posterior. The diagonal and full Laplace approximation require stronger regularisation than our Kronecker factored one, as they have higher uncertainty when not regularised. In particular the full Laplace approximation vastly overestimates the uncertainty without additional regularisation, leading to a bad predictive mean (see Appendix E for the corresponding figures), as the Hessian of the log likelihood is underdetermined. This is commonly the case in deep learning, as the number of parameters is typically much larger than the number of data points. Hence restricting the structure of the covariance is not only a computational necessity for most architectures, but also allows for more precise estimation of the approximate covariance.
5.2 OUT-OF-DISTRIBUTION UNCERTAINTY
For a more realistic test, similar to (Louizos & Welling, 2017), we assess the uncertainty of the predictions when classifying data from a different distribution than the training data. For this we train a network with two layers of 1024 hidden units and ReLU transfer functions to classify MNIST digits. We use a learning rate of 10⁻² and momentum of 0.9 for 250 epochs. We apply Dropout with p=0.5 after each inner layer, as our chief interest is to compare against its uncertainty estimates. We further use L2-regularisation with a factor of 10⁻² and randomly binarise the images during training according to their pixel intensities and draw 1,000 such samples per datapoint for estimating the curvature factors. We use this network to classify the images in the notMNIST dataset⁴, which contains 28×28 grey-scale images of the letters ‘A’ to ‘J’ from various computer fonts, i.e. not digits. An ideal classifier would make uniform predictions over its classes.
We compare the uncertainty obtained by predicting the digit class of the notMNIST images using 1. a deterministic forward pass through the Dropout trained network, 2. by sampling different Dropout masks and averaging the predictions, and by sampling different weight matrices from 3. the matrix normal distribution obtained from our Kronecker factored Laplace approximation as well as 4. the diagonal one. As an additional baseline similar to (Blundell et al., 2015; Graves, 2011), we compare to a network with identical architecture with a fully factorised Gaussian (FFG) approximate posterior on the weights and a standard normal prior. We train the model on the variational lower bound using the reparametrisation trick (Kingma & Welling, 2013). We use 100 samples for the stochastic forward passes and optimise the hyperparameters of the Laplace approximations w.r.t. the cross-entropy on the validation set of MNIST.
We measure the uncertainty of the different methods as the entropy of the predictive distribution, which has a minimal value of 0 when a single class is predicted with certainty and a maximum of about 2.3 for uniform predictions. Fig. 2 shows the inverse empirical cumulative distribution of the entropy values obtained from the four methods. Consistent with the results in (Gal & Ghahramani, 2016), averaging the probabilities of multiple passes through the network yields predictions with higher uncertainty than a deterministic pass that approximates the geometric average (Srivastava et al., 2014). However, there still are some images that are predicted to be a digit with certainty. Our Kronecker factored Laplace approximation makes hardly any predictions with absolute certainty and assigns high uncertainty to most of the letters as desired. The diagonal Laplace approximation required stronger regularisation towards predicting deterministically, yet it performs similarly to Dropout. As shown in Table 1, however, the network makes predictions on the test set of MNIST with similar accuracy to the deterministic forward pass and MC Dropout when using our approximation. The variational factorised Gaussian posterior has low uncertainty as expected.
⁴From: http://yaroslavvb.blogspot.nl/2011/09/notmnist-dataset.html
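The uncertainty measure used here — the entropy of the Monte Carlo averaged predictive distribution — can be computed as in the following short sketch (a generic helper, not code from the paper):

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of an averaged predictive distribution.

    probs: array of shape (T, K) with class probabilities from T stochastic
    forward passes (weight or Dropout samples); returns a value in [0, log K].
    """
    p_mean = probs.mean(axis=0)
    return -np.sum(p_mean * np.log(p_mean + 1e-12))

# for 10 MNIST classes the maximum is log(10) ~= 2.3 (uniform prediction)
```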
5.3 ADVERSARIAL EXAMPLES
To further test the robustness of our prediction method close to the data distribution, we perform an adversarial attack on a neural network. As first demonstrated in (Szegedy et al., 2013), neural networks are prone to being fooled by gradient-based changes to their inputs. Li & Gal (2017) suggest, and provide empirical support, that Bayesian models may be more robust to such attacks, since they implicitly form an infinitely large ensemble by integrating over the model parameters. For our experiments, we use the fully connected net trained on MNIST from the previous section and compare the sensitivity of the different prediction methods for two kinds of adversarial attacks.
First, we use the untargeted Fast Gradient Sign method x_adv = x − η sgn(∇_x max_y log p^(M)(y|x)) suggested in (Goodfellow et al., 2014), which takes the gradient of the class predicted with maximal probability by method M w.r.t. the input x and reduces this probability with varying step size η. This step size is rescaled by the difference between the maximal and minimal value per dimension in the dataset. It is to be expected that this method generates examples away from the data manifold, as there is no clear subset of the data that corresponds to e.g. “not ones”.
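A sketch of this untargeted step; the gradient function below is a placeholder for differentiating max_y log p^(M)(y|x) of the chosen prediction method M with respect to the input:

```python
import numpy as np

def fgsm_untargeted(x, grad_max_log_prob, eta, x_min, x_max):
    """Untargeted Fast Gradient Sign step (Goodfellow et al., 2014).

    grad_max_log_prob(x) -> gradient of max_y log p(y|x) w.r.t. the input x;
    eta is rescaled by the per-dimension range of the data, as in the text.
    """
    step = eta * (x_max - x_min)                       # rescaled step size
    return x - step * np.sign(grad_max_log_prob(x))
```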
Fig. 3 shows the average predictive uncertainty and the accuracy on the original class on the MNIST test set as the step size η increases. The Kronecker factored Laplace approximation achieves significantly higher uncertainty than any other prediction method as the images move away from the data. Both the diagonal and the Kronecker factored Laplace maintain higher accuracy than MC Dropout on their original predictions. Interestingly, the deterministic forward pass appears to be most robust in terms of accuracy, however it has much smaller uncertainty on the predictions it makes and will confidently predict a false class for most images, whereas the other methods are more uncertain.
Furthermore, we perform a targeted attack that attempts to force the network to predict a specific class, in our case ‘0’ following (Li & Gal, 2017). Hence, for each method, we exclude all data points in the test set that are already predicted as ‘0’. The updates are of similar form to the untargeted attack, however they increase the probability of the pre-specified class y rather than decreasing the current maximum, as x_y^(t+1) = x_y^(t) + η sgn(∇_x log p^(M)(y|x_y^(t))), where x_y^(0) = x.
We use a step size of η=10⁻² for the targeted attack. The uncertainty and accuracy on the original and target class are shown in Fig. 4. Here, the Kronecker factored Laplace approximation has slightly smaller uncertainty at its peak in comparison to the other methods, however it appears to be much more robust. It only misclassifies over 50% of the images after about 20 steps, whereas for the other methods this is the case after roughly 10 steps; it only reaches 100% accuracy on the target class after almost 50 updates, whereas the other methods are fooled on all images after about 25 steps.
In conjunction with the experiment on notMNIST, it appears that the Laplace approximation achieves higher uncertainty than Dropout away from the data, as in the untargeted attack. In the targeted attack it exhibits smaller uncertainty than Dropout, yet it is more robust to having its prediction changed. The diagonal Laplace approximation again performs similarly to Dropout.
5.4 UNCERTAINTY ON MISCLASSIFICATIONS
To highlight the scalability of our method, we apply it to a state-of-the-art convolutional network architecture. Recently, deep residual networks (He et al., 2016a;b) have been the most successful
ones among those. As demonstrated in (Grosse & Martens, 2016), Kronecker factored curvature methods are applicable to convolutional layers by interpreting them as matrix-matrix multiplications.
We compare our uncertainty estimates on wide residual networks (Zagoruyko & Komodakis, 2016), a recent variation that achieved competitive performance on CIFAR100 (Krizhevsky & Hinton, 2009) while, in contrast to most other residual architectures, including Dropout at specific points. While this does not correspond to using Dropout in the Bayesian sense (Gal & Ghahramani, 2015), it allows us to at least compare our method to the uncertainty estimates obtained from Dropout.
We note that it is straightforward to incorporate batch normalisation (Ioffe & Szegedy, 2015) into the curvature backpropagation algorithms, so we apply a standard Laplace approximation to its parameters as well. We are not aware of any interpretation of Dropout as performing Bayesian inference on the parameters of batch normalisation. Further implementation details are in Appendix G.
Again, the accuracy of the prediction methods is comparable (see Table 2 in Appendix F). For calculating the curvature factors, we draw 5,000 samples per image using the same data augmentation as during training, effectively increasing the dataset size to 2.5×10⁸. The diagonal approximation had to be regularised to the extent of becoming deterministic, so we omit it from the results.
In Fig. 5 we compare the distribution of the predictive uncertainty on the test set.⁵ We distinguish between the uncertainty on correct and incorrect classifications, as the mistakes of a system used in practice may be less severe if the network can at least indicate that it is uncertain. Thus, high uncertainty on misclassifications and low uncertainty on correct ones would be desirable, such that a system could return control to a human expert when it cannot make a confident decision. In general, the network tends to be more uncertain on its misclassifications than its correct ones regardless of whether it was trained with or without Dropout and of the method used for prediction. Both Dropout and the Laplace approximation similarly increase the uncertainty in the predictions, however this is irrespective of the correctness of the classification. Yet, our experiments show that the Kronecker factored Laplace approximation can be scaled to modern convolutional networks and maintain good classification accuracy while having similar uncertainty about the predictions as Dropout.
We had to use much stronger regularisation for the Laplace approximation on the wide residual network, possibly because the block-diagonal approximation becomes more inaccurate on deep networks, or because the number of parameters is much higher relative to the number of data points. It would be interesting to see how the Laplace approximation behaves on a much larger dataset like ImageNet for similarly sized networks, where we have a better ratio of data to parameters and curvature directions. However, even on a relatively small dataset like CIFAR we did not have to regularise the Laplace approximation to the degree of the posterior becoming deterministic.
5We use the first 5, 000 images as a validation set to tune the hyperparameters of our Laplace approximation and the final 5, 000 ones for evaluating the predictive uncertainty on all methods.
6 CONCLUSION
We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental results suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better. It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting.
There are many possible extensions to this work. One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how (MacKay, 1992) interpolates between the data log likelihood and the width of the prior. The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates. A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated.
ACKNOWLEDGEMENTS
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1. We thank the anonymous reviewers for their feedback and Harshil Shah for his comments on an earlier draft of this paper.
A DERIVATION OF THE ACTIVATION HESSIAN RECURSION
Here, we provide the basic derivation of the factorisation of the diagonal blocks of the Hessian in Eq. 1 and the recursive formula for calculatingH as presented in (Botev et al., 2017). The Hessian of a neural network with parameters θ as defined in the main text has elements:
[H]ij = ∂²E(θ) / (∂θi ∂θj)   (10)
For a given layer λ, the gradient w.r.t. a weight Wλa,b is:
∂E/∂W^λ_{a,b} = Σ_i (∂h^λ_i/∂W^λ_{a,b}) (∂E/∂h^λ_i) = a^{λ−1}_b ∂E/∂h^λ_a   (11)
Keeping λ fixed and differentiating again, we find that the per-sample Hessian of that layer is:
[Hλ]_{(a,b),(c,d)} ≡ ∂²E / (∂W^λ_{a,b} ∂W^λ_{c,d}) = a^{λ−1}_b a^{λ−1}_d [Hλ]_{a,c}   (12)
where
[Hλ]_{a,b} = ∂²E / (∂h^λ_a ∂h^λ_b)   (13)
is the pre-activation Hessian.
We can reexpress this in matrix notation as a Kronecker product as in Eq. 1:
Hλ = ∂²E / (∂ vec(Wλ) ∂ vec(Wλ)) = (aλ−1 aλ−1ᵀ) ⊗ Hλ   (14)
The pre-activation Hessian can be calculated recursively as:
Hλ = Bλ Wλ+1ᵀ Hλ+1 Wλ+1 Bλ + Dλ   (15)
where the diagonal matrices B and D are defined as:
Bλ = diag(f′λ(hλ))   (16)
Dλ = diag(f″λ(hλ) ∂E/∂aλ)   (17)
f ′ and f ′′ denote the first and second derivative of the transfer function. The recursion is initialised with the Hessian of the error w.r.t. the linear network outputs.
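A NumPy sketch of this backward recursion for a small fully connected network with elementwise transfer functions (the tanh derivatives are used here purely for illustration, and biases are ignored for simplicity):

```python
import numpy as np

def preactivation_hessians(Ws, hs, dE_das, H_L,
                           f_p=lambda h: 1 - np.tanh(h) ** 2,
                           f_pp=lambda h: -2 * np.tanh(h) * (1 - np.tanh(h) ** 2)):
    """Backward recursion of Eq. 15 for a single data point (toy sketch).

    Ws[k]: weight matrix of layer k; hs[l]: pre-activations h_l;
    dE_das[l]: gradient of the error w.r.t. the activations a_l;
    H_L: Hessian of the error w.r.t. the linear network outputs h_L.
    All containers are dicts keyed by the layer index 1, ..., L.
    """
    L = max(hs)
    H = {L: H_L}
    for l in range(L - 1, 0, -1):
        B = np.diag(f_p(hs[l]))                               # Eq. 16
        D = np.diag(f_pp(hs[l]) * dE_das[l])                  # Eq. 17
        H[l] = B @ Ws[l + 1].T @ H[l + 1] @ Ws[l + 1] @ B + D  # Eq. 15
    return H
```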
For further details and on how to calculate the diagonal blocks of the Gauss-Newton and Fisher matrix, we refer the reader to (Botev et al., 2017) and (Martens & Grosse, 2015).
B MATRIX NORMAL DISTRIBUTION
The matrix normal distribution (Gupta & Nagar, 1999) is a multivariate distribution over an entire matrix of shape n × p rather than just a vector. In contrast to the multivariate normal distribution, it is parameterised by two p.s.d. covariance matrices, U : n × n and V : p × p, which indicate the covariance of the rows and columns respectively. In addition it has a mean matrix M : n × p. A vectorised sample from a matrix normal distribution X ∼ MN(M, U, V) corresponds to a sample from a normal distribution vec(X) ∼ N(vec(M), U ⊗ V). However, samples can be drawn more efficiently as X = M + AZB with Z ∼ MN(0, I, I), and AAᵀ = U and BᵀB = V. The sample Z corresponds to a sample from a normal distribution of length np that has been reshaped to an n × p matrix. This is more efficient in the sense that we only need to calculate two matrix-matrix products of small matrices, rather than a matrix-vector product with one big one.
C APPROXIMATION OF THE EXPECTED ACTIVATION HESSIAN
While the square root of Qλ is calculated during the forward pass on all layers,H requires an additional backward pass. Strictly speaking, it is not essential to approximate E [H] for the Kronecker factored Laplace approximation, as in contrast to optimisation procedures the curvature only needs to be calculated once and is thus not time critical. For datasets of the scale of ImageNet and the networks used for such datasets, it would still be impractically slow to perform the calculation for every data point individually. Furthermore, as most datasets are augmented during training, e.g. random cropping or reflections of images, the curvature of the network can be estimated using the same augmentations, effectively increasing the size of the dataset by orders of magnitude. Thus, we make use of the minibatch approximation in our experiments — as we make use of data augmentation — in order to demonstrate its practical applicability.
We note that E [H] can be calculated exactly by running KFRA (Botev et al., 2017) with a minibatchsize of one, and then averaging the results. KFAC (Martens & Grosse, 2015), in contrast, stochastically approximates the Fisher matrix, so even when run for every datapoint separately, it cannot calculate the curvature factor exactly.
In the following, we also show figures for the adversarial experiments in which we calculate the curvature per datapoint and without data augmentation:
Fig. 6 and Fig. 7 show how the Laplace approximation with the curvature estimated from 1000 randomly sampled binary MNIST images and the activation Hessian calculated with a minibatch size of 100 performs in comparison to the curvature factor being calculated without any data augmentation with a batch size of 100 or exactly. We note that without data augmentation we had to use much stronger regularisation of the curvature factors, in particular we had to add a non-negligible multiple of the identity to the factors, whereas with data augmentation it was only needed to ensure that the matrices are invertible. The Kronecker factored Laplace approximation reaches particularly high uncertainty on the untargeted adversarial attack and is most robust on the targeted attack when using data augmentation, suggesting that it is particularly well suited for large datasets and ones
where some form of data augmentation can be applied. The difference between approximating the activation Hessian over a minibatch and calculating it exactly appears to be negligible.
D MEMORY AND COMPUTATIONAL REQUIREMENTS
If we denote the dimensionality of the input to layer λ as Dλ−1 and its output as Dλ, the curvature factors correspond to the two precision matrices with Dλ−1(Dλ−1+1)/2 and Dλ(Dλ+1)/2 ‘parameters’ to estimate, since they are symmetric. So across a network, the number of curvature directions that we are estimating grows linearly in the number of layers and quadratically in the dimension of the layers, i.e. the number of columns of the weight matrices. The size of the full Hessian, on the other hand, grows quadratically in the number of layers and with the fourth power in the dimensionality of the layers (assuming they are all the same size).
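As a quick back-of-the-envelope illustration of these counts (toy layer sizes, assuming 4-byte floats):

```python
# symmetric curvature factors of one layer vs the full Hessian block of that layer
D_in, D_out = 1024, 1024
factor_entries = D_in * (D_in + 1) // 2 + D_out * (D_out + 1) // 2
full_block_entries = (D_in * D_out) ** 2
print(factor_entries, "vs", full_block_entries)          # ~1e6 vs ~1e12 entries
print(full_block_entries * 4 / 1e12, "TB for the full block at 4 bytes per entry")
```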
Once the curvature factors are calculated, which only needs to be done once, we use their Cholesky decomposition to solve two triangular linear systems when sampling weights from the matrix normal distribution. We use the same weight samples for each minibatch, i.e. we do not sample a weight matrix per datapoint. This is for computational efficiency and does not change the expectation.
One possibility to save computation time would be to sample a fixed set of weight matrices from the approximate posterior — in order to avoid solving the linear system on every forward pass — and treat the networks that they define as an ensemble. The individual ensemble members can be evaluated in parallel and their outputs averaged, which can be done with a small overhead over evaluating a single network given sufficient compute resources. A further speed up can be achieved by distilling the predictive distributions of the Laplace network into a smaller, deterministic feedforward network as successfully demonstrated in (Balan et al., 2015) for posterior samples using HMC.
E COMPLEMENTARY FIGURES FOR THE TOY DATASET
Fig. 8 shows the different Laplace approximations (Kronecker factored, diagonal, full) from the main text without any hyperparameter tuning. The figure of the uncertainty obtained from samples using HMC is repeated. Note that the scale is larger than in the main text due to the high uncertainty of the Laplace approximations.
The Laplace approximations are increasingly uncertain away from the data, as the true posterior estimated from HMC samples, however they all overestimate the uncertainty without regularisation. This is easy to fix by optimising the hyperparameters on a validation set as discussed in the main text, resulting in posterior uncertainty much more similar to the true posterior. As previously discussed in (Botev et al., 2017), the Hessian of a neural network is usually underdetermined as the number of data points is much smaller than the number of parameters — in our case we have 20 data points to estimate a 78×78 precision matrix. This leads to the full Laplace approximation vastly overestimating the uncertainty and a bad predictive mean. Both the Kronecker factored and the diagonal approximation exhibit smaller variance than the full Laplace approximation as they restrict the structure of the precision matrix. Consistently with the other experiments, we find the diagonal
Laplace approximation to place more mass in low probability areas of the posterior than the Kronecker factored approximation, resulting in higher variance on the regression problem. This leads to a need for greater regularisation of the diagonal approximation to obtain acceptable predictive performance, and underestimating the uncertainty.
F PREDICTION ACCURACY
This section shows the accuracy values obtained from the different predictions methods on the feedforward networks for MNIST and the wide residual network for CIFAR100. The results for MNIST are shown in Table 1 and the results for CIFAR in Table 2.
In all cases, neither MC Dropout nor the Laplace approximation significantly change the classification accuracy of the network in comparison to a deterministic forward pass.
G IMPLEMENTATION DETAILS FOR RESIDUAL NETWORKS
Our wide residual network has n=3 block repetitions and a width factor of k=8 on CIFAR100 with and without Dropout using hyperparameters taken from (Zagoruyko & Komodakis, 2016): the network parameters are trained on a cross-entropy loss using Nesterov momentum with an initial learning rate of 0.1 and momentum of 0.9 for 200 epochs with a minibatch size of 128. We decay the learning rate every 50 epochs by a factor of 0.2, which is slightly different to the schedule used in (Zagoruyko & Komodakis, 2016) (they decay after 60, 120 and 160 epochs). As the original authors, we use L2-regularisation with a factor of 5×10⁻⁴.
We make one small modification to the architecture: instead of downsampling with 1×1 convolutions with stride 2, we use 2×2 convolutions. This is due to Theano not supporting the transformation of images into the patches extracted by a convolution for 1×1 convolutions with stride greater than 1, which we require for our curvature backpropagation through convolutions.
We apply a standard Laplace approximation to the batch normalisation parameters — a Kronecker factorisation is not needed, since the parameters are one-dimensional. When calculating the curvature factors, we use the moving averages for the per-layer means and standard deviations obtained after training, in order to maintain independence between the data points in a minibatch.
We need to make a further approximation to the ones discussed in Section 2.2 when backpropagating the curvature for residual networks. The residual blocks compute a function of the form res(x) = x + fφ(x), where fφ typically is a sequence of convolutions, batch normalisation and elementwise nonlinearities. This means that we would need to pass back two curvature matrices, one for each summand. However, this would double the number of backpropagated matrices for each residual connection, hence the computation time/memory requirements would grow exponentially in the number of residual blocks. Therefore, we simply add the curvature matrices after each residual connection. | 1. What is the main contribution of the paper in the field of neural networks?
2. What are the strengths of the proposed method, particularly in comparison to other uncertainty estimation techniques?
3. What are the weaknesses of the paper, specifically regarding its claims and experimental designs?
4. How does the reviewer assess the clarity, quality, and novelty of the paper's content?
5. Are there any questions or concerns regarding the Kronecker-factor approximation of the Hessian and its relation to training? | Review | Review
This paper proposes a novel scalable method for incorporating uncertainty estimates in neural networks, in addition to existing methods using, for example, variational inference and expectation propagation. The novelty is in extending the Laplace approximation introduced in MacKay (1992) using a Kronecker-factored approximation of the Hessian. The paper is well written and easy to follow. It provides extensive references to related works, and supports its claims with convincing experiments from different domains.
Pros:
-A novel method in an important and interesting direction.
-It is a prediction method, so can be applied on existing trained neural networks (however, see the first con).
-Well-written with high clarity.
-Extensive and convincing experiments.
Cons:
-Although it is a predictive method, it's still worth discussing how this method relates to training. For example, I suspect it works better when the model is trained with second-order method, as the resulting Taylor approximation (eq. 2) of the log-likelihood function might have higher quality when both terms are explicitly used in optimisation.
-The difference between using KFAC and KFRA is unclear, or should be better explained if they are identical in this context. Botev et al. (2017) report that they are slightly different in approximating the Gauss-Newton matrix.
-Acronyms, even well-known, are better defined before using (e.g., EP, PSD).
-Need more details of the optimisation method used in experiments, especially the last one. |
ICLR | Title
A Scalable Laplace Approximation for Neural Networks
Abstract
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
1 INTRODUCTION
Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them. This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold. While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease.
The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values. Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability. However, the posterior of neural networks is usually intractable due to their size and nonlinearity.
There has been previous interest in integrating neural networks into the Bayesian framework (MacKay, 1992; Hinton & Van Camp, 1993; Neal, 1993; Barber & Bishop, 1998), however these approaches were designed for small networks by current standards. Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable. All of (Graves, 2011; Hernández-Lobato & Adams, 2015; Blundell et al., 2015) assume independence between the individual weights. While they achieve good results on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound. The approach in (Gal & Ghahramani, 2016) requires the use of certain stochastic regularisers which are not commonly present in most recent architectures. Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior.
Recent work on second-order optimisation of neural networks (Martens & Grosse, 2015; Botev et al., 2017) has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product. We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation (MacKay, 1992) with Kronecker factored covariance matrices. This leads to a computationally efficient matrix normal posterior distribution
∗Corresponding author: j.ritter@cs.ucl.ac.uk
(Gupta & Nagar, 1999) over the weights of every layer. Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks.
2 THE CURVATURE OF NEURAL NETWORKS
Our method is inspired by recent Kronecker factored approximations of the curvature of a neural network (Martens & Grosse, 2015; Botev et al., 2017) for optimisation and we give a high-level review of these in the following. While the two methods approximate the Gauss-Newton and Fisher matrix respectively, as they are guaranteed to be positive semi-definite (p.s.d.), we base all of our discussion on the Hessian in order to be as general as possible.
2.1 NEURAL NETWORK NOTATION
We denote a feedforward network as taking an input a0 = x and producing an output hL. The intermediate representations for layers λ = 1, ..., L are denoted as hλ = Wλaλ−1 and aλ = fλ(hλ). We refer to aλ as the activations, and hλ as the (linear) pre-activations. The bias terms are absorbed into the Wλ by appending a 1 to each aλ. The network parameters are optimised w.r.t. an error function E(y, hL) for targets y. Most commonly used error functions, such as squared error and categorical cross-entropy, can be interpreted as exponential family negative log likelihoods − log p(y|hL).
2.2 KRONECKER FACTORED SECOND-ORDER OPTIMISATION
Traditional second-order methods use either the Hessian matrix or a positive semi-definite approximation thereof to generate parameter updates of the form ∆ = C⁻¹g, where C is the chosen curvature matrix and g the gradient of the error function parameterised by the network. However, this curvature matrix is infeasible to compute for modern neural networks as their number of parameters is often in the millions, rendering the size of C of the order of several terabytes.
Recent work (Martens & Grosse, 2015; Botev et al., 2017) exploits that, for a single data point, the diagonal blocks of these curvature matrices are Kronecker factored:
Hλ = ∂²E / (∂ vec(Wλ) ∂ vec(Wλ)) = Qλ ⊗ Hλ   (1)
where Hλ is the Hessian w.r.t. the weights in layer λ, Qλ = aλ−1 aλ−1ᵀ denotes the covariance of the incoming activations aλ−1, and Hλ = ∂²E / (∂hλ ∂hλ) is the pre-activation Hessian, i.e. the Hessian of the error w.r.t. the linear pre-activations hλ in a layer. We provide the derivation for this result as well as the recursion for calculating H in Appendix A. The Kronecker factorisation holds two key advantages: the matrices that need to be computed and stored are much smaller — if we assume all layers to be of dimensionality D, the two factors are each of size D², whereas the full Hessian for the weights of only one layer would have D⁴ elements. Furthermore, the inverse of a Kronecker product is equal to the Kronecker product of the inverses, so it is only necessary to invert those two moderately sized matrices.
In order to maintain this structure over a minibatch of data, all Kronecker factored second-order methods make two core approximations: First, they only model the diagonal blocks corresponding to the weights of a layer, such that the curvature decomposes into L independent matrices. Second, they assume Qλ andHλ to be independent. This is in order to maintain the Kronecker factorisation in expectation, i.e. E [Qλ ⊗Hλ] ≈ E [Qλ] ⊗ E [Hλ], since the expectation of a Kronecker product is not guaranteed to be Kronecker factored itself.
The main difference between the Kronecker factored second-order optimisers lies in how they efficiently approximate E [Hλ]. For exact calculation, it would be necessary to pass back an entire matrix per data point in a minibatch, which imposes infeasible memory and computational requirements. KFRA (Botev et al., 2017) simply passes back the expectation at every layer, while KFAC (Martens & Grosse, 2015) utilises the Fisher identity to only propagate a vector rather than a matrix, approximating the Kronecker factors with a stochastic rank-one matrix for each data point.
The diagonal blocks of the Hessian and Gauss-Newton matrix are equal for neural networks with piecewise linear activation functions (Botev et al., 2017), thus both methods can be used to directly approximate the diagonal blocks of the Hessian of such networks, as the Gauss-Newton and Fisher are equivalent for networks that parameterise an exponential family log likelihood.
3 A SCALABLE LAPLACE APPROXIMATION FOR NEURAL NETWORKS
3.1 THE LAPLACE APPROXIMATION
The standard Laplace approximation is obtained by taking the second-order Taylor expansion around a mode of a distribution. For a neural network, such a mode can be found using standard gradient-based methods. Specifically, if we approximate the log posterior over the weights of a network given some data D around a MAP estimate θ∗, we obtain:
log p(θ|D) ≈ log p(θ∗|D) − ½ (θ − θ∗)ᵀ H̄ (θ − θ∗)   (2)
where θ = [vec(W1), ..., vec(WL)] is the stacked vector of weights and H̄ = E [H] the average Hessian of the negative log posterior1. The first order term is missing because we expand the function around a maximum θ∗, where the gradient is zero. If we exponentiate this equation, it is easy to notice that the right-hand side is of Gaussian functional form for θ, thus we obtain a normal distribution by integrating over it. The posterior over the weights is then approximated as Gaussian:
θ ∼ N (θ∗, H̄−1) (3)
assuming H̄ is p.s.d. We can then approximate the posterior mean when predicting on unseen data D∗ by averaging the predictions of T Monte Carlo samples θ(t) from the approximate posterior:
p(D∗|D) = ∫ p(D∗|θ) p(θ|D) dθ ≈ (1/T) Σ_{t=1}^T p(D∗|θ^(t))   (4)
3.2 DIAGONAL LAPLACE APPROXIMATION
Unfortunately, it is not feasible to compute or invert the Hessian matrix w.r.t. all of the weights jointly. An approximation that is easy to compute in modern automatic differentiation frameworks is the diagonal of the Fisher matrix F , which is simply the expectation of the squared gradients:
H ≈ diag(F) = diag(E[∇θ log p(y|x) ∇θ log p(y|x)ᵀ]) = diag(E[(∇θ log p(y|x))²])   (5)
where diag extracts the diagonal of a matrix or turns a vector into a diagonal matrix. Such diagonal approximations to the curvature of a neural network have been used successfully for pruning the weights (LeCun et al., 1990) and, more recently, for transfer learning (Kirkpatrick et al., 2017).
This corresponds to modelling the weights with a Normal distribution with diagonal covariance:
vec(Wλ) ∼ N(vec(W∗λ), diag(Fλ)⁻¹)   for λ = 1, . . . , L   (6)
Unfortunately, even if the Taylor approximation is accurate, this will place significant probability mass in low probability areas of the true posterior if some weights exhibit high covariance.
1The average Hessian is typically scaled by the number of data points N . In order to keep the notation uncluttered, we develop our basic methods in terms of the average Hessian and discuss the scaling separately.
3.3 KRONECKER FACTORED LAPLACE APPROXIMATION
So while it is desirable to model the covariance between the weights, some approximations are needed in order to remain computationally efficient. First, we assume the weights of the different layers to be independent. This corresponds to the block-diagonal approximation in KFAC and KFRA, which empirically preserves sufficient information about the curvature to obtain competitive optimisation performance. For our purposes this means that our posterior factorises over the layers.
As discussed above, the Hessian of the log-likelihood for a single datapoint is Kronecker factored, and we denote the two factor matrices as Hλ = Qλ ⊗ Hλ.² By further assuming independence between Q and H in all layers, we can approximate the expected Hessian of each layer as:
E [Hλ] = E [Qλ ⊗Hλ] ≈ E [Qλ]⊗ E [Hλ] (7)
Hence, the Hessian of every layer is Kronecker factored over an entire dataset and the Laplace approximation can be approximated by a product of Gaussians. Each Gaussian has a Kronecker factored covariance, corresponding to a matrix normal distribution (Gupta & Nagar, 1999), which considers the two Kronecker factors of the covariance to be the covariances of the rows and columns of a matrix. The two factors are much smaller than the full covariance and allow for significantly more efficient inversion and sampling (we review the matrix normal distribution in Appendix B).
Our resulting posterior for the weights in layer λ is then:
Wλ ∼ MN(W∗λ, Q̄λ⁻¹, H̄λ⁻¹)   (8)
In contrast to optimisation methods, we do not need to approximate E [Hλ] as it is only calculated once. However, when it is possible to augment the data (e.g. randomised cropping of images), it may be advantageous. We provide a more detailed discussion of this in Appendix C.
3.4 INCORPORATING THE PRIOR AND REGULARISING THE CURVATURE FACTORS
Just as the log posterior, the Hessian decomposes into a term depending on the data log likelihood and one on the prior. For the commonly used L2-regularisation, corresponding to a Gaussian prior, the Hessian is equal to the precision of the prior times the identity matrix. We approximate this by adding a multiple of the identity to each of the Kronecker factors from the log likelihood:
Hλ = N E[−∂² log p(D|θ) / ∂θ²] + τI ≈ (√N E[Qλ] + √τ I) ⊗ (√N E[Hλ] + √τ I)   (9)
where τ is the precision of the Gaussian prior on the weights and N the size of the dataset. However, we can also treat them as hyperparameters and optimise them w.r.t. the predictive performance on a validation set. We emphasise that this can be done without retraining the network, so it does not impose a large computational overhead and is trivial to parallelise.
Setting N to a larger value than the size of the dataset can be interpreted as including duplicates of the data points as pseudo-observations. Adding a multiple of the identity to the precision matrix decreases the uncertainty about each parameter. This has a regularising effect both on our approximation to the true Laplace, which may be overestimating the variance in certain directions due to ignoring the covariances between the layers, as well as the Laplace approximation itself, which may be placing probability mass in low probability areas of the true posterior.
4 RELATED WORK
Most recent attempts at approximating the posterior of a neural network are based on formulating an approximate distribution to the posterior and optimising the variational lower bound w.r.t. its parameters. (Graves, 2011; Blundell et al., 2015; Kingma et al., 2015) as well as the expectation propagation based approaches of (Hernández-Lobato & Adams, 2015) and (Ghosh et al., 2016) assume independence between the individual weights which, particularly when optimising the KL divergence, often lets the model underestimate the uncertainty about the weights. Gal & Ghahramani (2016) interpret Dropout to approximate the posterior with a mixture of delta functions, assuming independence between the columns. (Lakshminarayanan et al., 2016) suggest using an ensemble of networks for estimating the uncertainty.
²We assume a uniform prior for now, such that the Hessians of the posterior and the log likelihood are equal. We discuss how we incorporate a non-zero Hessian of a prior into the Kronecker factors in the next section.
Our work is a scalable approximation of (MacKay, 1992). Since the per-layer Hessian of a neural network is infeasible to compute, we suggest a factorisation of the covariance into a Kronecker product, leading to a more efficient matrix normal distribution. The posterior that we obtain is reminiscent of (Louizos & Welling, 2016) and (Sun et al., 2017), who optimise the parameters of a matrix normal distribution as their weights, which requires a modification of the training procedure.
5 EXPERIMENTS
Since the Laplace approximation is a method for predicting in a Bayesian manner and not for training, we focus on comparing to uncertainty estimates obtained from Dropout (Gal & Ghahramani, 2016). The trained networks will be identical, but the prediction methods will differ. We also compare to a diagonal Laplace approximation to highlight the benefit from modelling the covariances between the weights. All experiments are implemented using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015).3
5.1 TOY REGRESSION DATASET
As a first experiment, we visualise the uncertainty obtained from the Laplace approximations on a toy regression dataset, similar to (Hernández-Lobato & Adams, 2015). We create a dataset of 20 uniformly distributed points x ∼ U(−4, 4) and sample y ∼ N (x3, 32). In contrast to (HernándezLobato & Adams, 2015), we use a two-layer network with seven units per layer rather than one layer with 100 units. This is because both the input and output are one-dimensional, hence the weight matrices are vectors and the matrix normal distribution reduces to a multivariate normal distribution. Furthermore, the Laplace approximation is sensitive to the ratio of the number of data points to parameters, and we want to visualise it both with and without hyperparameter tuning.
Fig. 1 shows the uncertainty obtained from the Kronecker factored and diagonal Laplace approximation applied to the same network, as well as from a full Laplace approximation and 50, 000 HMC (Neal, 1993) samples. The latter two methods are feasible only for such a small model and dataset. For the diagonal and full Laplace approximation we use the Fisher identity and draw one sample per data point. We set the hyperparameters of the Laplace approximations (see Section 3.4) using a grid search over the likelihood of 20 validation points that are sampled the same way as the training set.
3We make our fork available at: https://github.com/BB-UCL/Lasagne
The regularised Laplace approximations all give an overall good fit to the HMC predictive posterior. Their uncertainty is slightly higher close to the training data and increases more slowly away from the data than that of the HMC posterior. The diagonal and full Laplace approximation require stronger regularisation than our Kronecker factored one, as they have higher uncertainty when not regularised. In particular the full Laplace approximation vastly overestimates the uncertainty without additional regularisation, leading to a bad predictive mean (see Appendix E for the corresponding figures), as the Hessian of the log likelihood is underdetermined. This is commonly the case in deep learning, as the number of parameters is typically much larger than the number of data points. Hence restricting the structure of the covariance is not only a computational necessity for most architectures, but also allows for more precise estimation of the approximate covariance.
5.2 OUT-OF-DISTRIBUTION UNCERTAINTY
For a more realistic test, similar to (Louizos & Welling, 2017), we assess the uncertainty of the predictions when classifying data from a different distribution than the training data. For this we train a network with two layers of 1024 hidden units and ReLU transfer functions to classify MNIST digits. We use a learning rate of 10−2 and momentum of 0.9 for 250 epochs. We apply Dropout with p=0.5 after each inner layer, as our chief interest is to compare against its uncertainty estimates. We further use L2-regularisation with a factor of 10−2 and randomly binarise the images during training according to their pixel intensities and draw 1,000 such samples per datapoint for estimating the curvature factors. We use this network to classify the images in the notMNIST dataset4, which contains 28×28 grey-scale images of the letters ‘A’ to ‘J’ from various computer fonts, i.e. not digits. An ideal classifier would make uniform predictions over its classes.
We compare the uncertainty obtained by predicting the digit class of the notMNIST images using 1. a deterministic forward pass through the Dropout trained network, 2. sampling different Dropout masks and averaging the predictions, 3. sampling different weight matrices from the matrix normal distribution obtained from our Kronecker factored Laplace approximation, and 4. doing the same with the diagonal one. As an additional baseline similar to (Blundell et al., 2015; Graves, 2011), we compare to a network with identical architecture with a fully factorised Gaussian (FFG) approximate posterior on the weights and a standard normal prior. We train the model on the variational lower bound using the reparametrisation trick (Kingma & Welling, 2013). We use 100 samples for the stochastic forward passes and optimise the hyperparameters of the Laplace approximations w.r.t. the cross-entropy on the validation set of MNIST.
We measure the uncertainty of the different methods as the entropy of the predictive distribution, which has a minimal value of 0 when a single class is predicted with certainty and a maximum of about 2.3 for uniform predictions. Fig. 2 shows the inverse empirical cumulative distribution of the entropy values obtained from the four methods. Consistent with the results in (Gal & Ghahramani, 2016), averaging the probabilities of multiple passes through the network yields predictions with higher uncertainty than a deterministic pass that approximates the geometric average (Srivastava et al., 2014). However, there still are some images that are predicted to be a digit with certainty. Our Kronecker factored Laplace approximation makes hardly any predictions with absolute certainty and assigns high uncertainty to most of the letters as desired. The diagonal Laplace approximation required stronger regularisation towards predicting deterministically, yet it performs similarly to Dropout. As shown in Table 1, however, the network makes predictions on the test set of MNIST with similar accuracy to the deterministic forward pass and MC Dropout when using our approximation. The variational factorised Gaussian posterior has low uncertainty as expected.
4From: http://yaroslavvb.blogspot.nl/2011/09/notmnist-dataset.html
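For reference, a small sketch of the uncertainty measure used here: the entropy of the class probabilities averaged over stochastic forward passes (illustrative NumPy code, not the implementation used for the experiments).
import numpy as np

def predictive_entropy(prob_samples):
    # prob_samples: array of shape (num_samples, num_classes) from stochastic passes
    p = prob_samples.mean(axis=0)                 # averaged predictive distribution
    return -np.sum(p * np.log(p + 1e-12))         # 0 = certain, ~2.3 = uniform over 10 classes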
5.3 ADVERSARIAL EXAMPLES
To further test the robustness of our prediction method close to the data distribution, we perform an adversarial attack on a neural network. As first demonstrated in (Szegedy et al., 2013), neural networks are prone to being fooled by gradient-based changes to their inputs. Li & Gal (2017) suggest, and provide empirical support, that Bayesian models may be more robust to such attacks, since they implicitly form an infinitely large ensemble by integrating over the model parameters. For our experiments, we use the fully connected net trained on MNIST from the previous section and compare the sensitivity of the different prediction methods for two kinds of adversarial attacks.
First, we use the untargeted Fast Gradient Sign method x_adv = x − η sgn(∇_x max_y log p^{(M)}(y|x)) suggested in (Goodfellow et al., 2014), which takes the gradient of the class predicted with maximal probability by method M w.r.t. the input x and reduces this probability with varying step size η. This step size is rescaled by the difference between the maximal and minimal value per dimension in the dataset. It is to be expected that this method generates examples away from the data manifold, as there is no clear subset of the data that corresponds to e.g. "not ones".
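A minimal sketch of this untargeted attack; the gradient helper and the NumPy formulation are illustrative assumptions rather than the code used in the experiments.
import numpy as np

def fgsm_untargeted(x, eta, grad_max_logprob, x_min, x_max):
    # grad_max_logprob(x) is assumed to return the gradient of max_y log p(y|x) w.r.t. x
    step = eta * (x_max - x_min)                  # rescale by the per-dimension range of the data
    return x - step * np.sign(grad_max_logprob(x))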
Fig. 3 shows the average predictive uncertainty and the accuracy on the original class on the MNIST test set as the step size η increases. The Kronecker factored Laplace approximation achieves significantly higher uncertainty than any other prediction method as the images move away from the data. Both the diagonal and the Kronecker factored Laplace maintain higher accuracy than MC Dropout on their original predictions. Interestingly, the deterministic forward pass appears to be most robust in terms of accuracy, however it has much smaller uncertainty on the predictions it makes and will confidently predict a false class for most images, whereas the other methods are more uncertain.
Furthermore, we perform a targeted attack that attempts to force the network to predict a specific class, in our case ‘0’, following (Li & Gal, 2017). Hence, for each method, we exclude all data points in the test set that are already predicted as ‘0’. The updates are of similar form to the untargeted attack, however they increase the probability of the pre-specified class y rather than decreasing the current maximum: x_y^{(t+1)} = x_y^{(t)} + η sgn(∇_x log p^{(M)}(y|x_y^{(t)})), where x_y^{(0)} = x.
We use a step size of η=10−2 for the targeted attack. The uncertainty and accuracy on the original and target class are shown in Fig. 4. Here, the Kronecker factored Laplace approximation has slightly smaller uncertainty at its peak in comparison to the other methods, however it appears to be much more robust. It only misclassifies over 50% of the images after about 20 steps, whereas this happens after roughly 10 steps for the other methods, and it only reaches 100% accuracy on the target class after almost 50 updates, whereas the other methods are fooled on all images after about 25 steps.
In conjunction with the experiment on notMNIST, it appears that the Laplace approximation achieves higher uncertainty than Dropout away from the data, as in the untargeted attack. In the targeted attack it exhibits smaller uncertainty than Dropout, yet it is more robust to having its prediction changed. The diagonal Laplace approximation again performs similarly to Dropout.
5.4 UNCERTAINTY ON MISCLASSIFICATIONS
To highlight the scalability of our method, we apply it to a state-of-the-art convolutional network architecture. Recently, deep residual networks (He et al., 2016a;b) have been the most successful ones among those. As demonstrated in (Grosse & Martens, 2016), Kronecker factored curvature methods are applicable to convolutional layers by interpreting them as matrix-matrix multiplications.
We compare our uncertainty estimates on wide residual networks (Zagoruyko & Komodakis, 2016), a recent variation that achieved competitive performance on CIFAR100 (Krizhevsky & Hinton, 2009) while, in contrast to most other residual architectures, including Dropout at specific points. While this does not correspond to using Dropout in the Bayesian sense (Gal & Ghahramani, 2015), it allows us to at least compare our method to the uncertainty estimates obtained from Dropout.
We note that it is straightforward to incorporate batch normalisation (Ioffe & Szegedy, 2015) into the curvature backpropagation algorithms, so we apply a standard Laplace approximation to its parameters as well. We are not aware of any interpretation of Dropout as performing Bayesian inference on the parameters of batch normalisation. Further implementation details are in Appendix G.
Again, the accuracy of the prediction methods is comparable (see Table 2 in Appendix F). For calculating the curvature factors, we draw 5,000 samples per image using the same data augmentation as during training, effectively increasing the dataset size to 2.5×10^8. The diagonal approximation had to be regularised to the extent of becoming deterministic, so we omit it from the results.
In Fig. 5 we compare the distribution of the predictive uncertainty on the test set.5 We distinguish between the uncertainty on correct and incorrect classifications, as the mistakes of a system used in practice may be less severe if the network can at least indicate that it is uncertain. Thus, high uncertainty on misclassifications and low uncertainty on correct ones would be desirable, such that a system could return control to a human expert when it cannot make a confident decision. In general, the network tends to be more uncertain on its misclassifications than its correct ones, regardless of whether it was trained with or without Dropout and of the method used for prediction. Both Dropout and the Laplace approximation similarly increase the uncertainty in the predictions, however this is irrespective of the correctness of the classification. Yet, our experiments show that the Kronecker factored Laplace approximation can be scaled to modern convolutional networks and maintain good classification accuracy while having similar uncertainty about the predictions as Dropout.
We had to use much stronger regularisation for the Laplace approximation on the wide residual network, possibly because the block-diagonal approximation becomes more inaccurate on deep networks, or because the number of parameters is much higher relative to the number of data points. It would be interesting to see how the Laplace approximation behaves on a much larger dataset like ImageNet for similarly sized networks, where we have a better ratio of data to parameters and curvature directions. However, even on a relatively small dataset like CIFAR we did not have to regularise the Laplace approximation to the degree of the posterior becoming deterministic.
5We use the first 5, 000 images as a validation set to tune the hyperparameters of our Laplace approximation and the final 5, 000 ones for evaluating the predictive uncertainty on all methods.
6 CONCLUSION
We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental results suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better. It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting.
There are many possible extensions to this work. One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how (MacKay, 1992) interpolates between the data log likelihood and the width of the prior. The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates. A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated.
ACKNOWLEDGEMENTS
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1. We thank the anonymous reviewers for their feedback and Harshil Shah for his comments on an earlier draft of this paper.
A DERIVATION OF THE ACTIVATION HESSIAN RECURSION
Here, we provide the basic derivation of the factorisation of the diagonal blocks of the Hessian in Eq. 1 and the recursive formula for calculating ℋ as presented in (Botev et al., 2017). The Hessian of a neural network with parameters θ as defined in the main text has elements:
[H]_{ij} = ∂²E(θ) / (∂θ_i ∂θ_j)   (10)
For a given layer λ, the gradient w.r.t. a weight W^λ_{a,b} is:
∂E/∂W^λ_{a,b} = Σ_i (∂h^λ_i/∂W^λ_{a,b}) (∂E/∂h^λ_i) = a^{λ−1}_b ∂E/∂h^λ_a   (11)
Keeping λ fixed and differentiating again, we find that the per-sample Hessian of that layer is:
[Hλ]_{(a,b),(c,d)} ≡ ∂²E / (∂W^λ_{a,b} ∂W^λ_{c,d}) = a^{λ−1}_b a^{λ−1}_d [ℋλ]_{a,c}   (12)
where
[ℋλ]_{a,b} = ∂²E / (∂h^λ_a ∂h^λ_b)   (13)
is the pre-activation Hessian.
We can re-express this in matrix notation as a Kronecker product as in Eq. 1:
Hλ = ∂²E / (∂vec(Wλ) ∂vec(Wλ)) = (a_{λ−1} a_{λ−1}^T) ⊗ ℋλ   (14)
The pre-activation Hessian can be calculated recursively as:
ℋλ = Bλ W_{λ+1}^T ℋ_{λ+1} W_{λ+1} Bλ + Dλ   (15)
where the diagonal matrices B and D are defined as:
Bλ = diag(f′_λ(hλ))   (16)
Dλ = diag(f″_λ(hλ) ∂E/∂aλ)   (17)
f′ and f″ denote the first and second derivative of the transfer function. The recursion is initialised with the Hessian of the error w.r.t. the linear network outputs.
For further details and on how to calculate the diagonal blocks of the Gauss-Newton and Fisher matrix, we refer the reader to (Botev et al., 2017) and (Martens & Grosse, 2015).
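As a concrete illustration of the recursion in Eqs. 15–17, the following sketch computes the per-sample Kronecker factors for a tanh MLP with a linear output layer and squared error; the architecture, loss and NumPy formulation are illustrative assumptions, not the configuration used in the experiments.
import numpy as np

def per_sample_curvature_factors(x, y, weights):
    # Forward pass; a 1 is appended to every activation to absorb the bias.
    a = [np.append(x, 1.0)]
    h = []
    for W in weights[:-1]:
        h.append(W @ a[-1])
        a.append(np.append(np.tanh(h[-1]), 1.0))
    h_out = weights[-1] @ a[-1]

    L = len(weights)
    Q = [np.outer(a[l], a[l]) for l in range(L)]     # Q_λ = a_{λ-1} a_{λ-1}^T (Eq. 14)

    # Backward recursion, initialised with the Hessian of the error w.r.t. the
    # linear outputs; for squared error this is the identity.
    pre_H = [None] * L
    pre_H[-1] = np.eye(h_out.shape[0])
    dE_dh = h_out - y
    for l in range(L - 2, -1, -1):
        W_next = weights[l + 1][:, :-1]              # drop the bias column when propagating back
        dE_da = W_next.T @ dE_dh                     # ∂E/∂a for the current hidden layer
        fp = 1.0 - np.tanh(h[l]) ** 2                # f'(h)
        fpp = -2.0 * np.tanh(h[l]) * fp              # f''(h)
        B = np.diag(fp)                              # Eq. 16
        D = np.diag(fpp * dE_da)                     # Eq. 17
        pre_H[l] = B @ W_next.T @ pre_H[l + 1] @ W_next @ B + D   # Eq. 15
        dE_dh = fp * dE_da                           # gradient w.r.t. the pre-activations below
    return Q, pre_H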
B MATRIX NORMAL DISTRIBUTION
The matrix normal distribution (Gupta & Nagar, 1999) is a multivariate distribution over an entire matrix of shape n × p rather than just a vector. In contrast to the multivariate normal distribution, it is parameterised by two p.s.d. covariance matrices, U: n × n and V: p × p, which indicate the covariance of the rows and columns respectively. In addition it has a mean matrix M: n × p. A vectorised sample from a matrix normal distribution X ∼ MN(M, U, V) corresponds to a sample from a normal distribution vec(X) ∼ N(vec(M), U ⊗ V). However, samples can be drawn more efficiently as X = M + AZB with Z ∼ MN(0, I, I), A A^T = U and B^T B = V. The sample Z corresponds to a sample from a normal distribution of length np that has been reshaped to a n × p matrix. This is more efficient in the sense that we only need to calculate two matrix-matrix products of small matrices, rather than a matrix-vector product with one big one.
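A short sketch of this sampling scheme, using Cholesky factors for A and B; this is illustrative NumPy code under the assumption that U and V are positive definite, not the implementation used in the experiments.
import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    A = np.linalg.cholesky(U)              # A A^T = U (row covariance)
    B = np.linalg.cholesky(V).T            # B^T B = V (column covariance)
    Z = rng.standard_normal(M.shape)       # Z ~ MN(0, I, I)
    return M + A @ Z @ B                   # X ~ MN(M, U, V)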
C APPROXIMATION OF THE EXPECTED ACTIVATION HESSIAN
While the square root of Qλ is calculated during the forward pass on all layers,H requires an additional backward pass. Strictly speaking, it is not essential to approximate E [H] for the Kronecker factored Laplace approximation, as in contrast to optimisation procedures the curvature only needs to be calculated once and is thus not time critical. For datasets of the scale of ImageNet and the networks used for such datasets, it would still be impractically slow to perform the calculation for every data point individually. Furthermore, as most datasets are augmented during training, e.g. random cropping or reflections of images, the curvature of the network can be estimated using the same augmentations, effectively increasing the size of the dataset by orders of magnitude. Thus, we make use of the minibatch approximation in our experiments — as we make use of data augmentation — in order to demonstrate its practical applicability.
We note that E [H] can be calculated exactly by running KFRA (Botev et al., 2017) with a minibatch size of one, and then averaging the results. KFAC (Martens & Grosse, 2015), in contrast, stochastically approximates the Fisher matrix, so even when run for every datapoint separately, it cannot calculate the curvature factor exactly.
In the following, we also show figures for the adversarial experiments in which we calculate the curvature per datapoint and without data augmentation:
Fig. 6 and Fig. 7 show how the Laplace approximation with the curvature estimated from 1000 randomly sampled binary MNIST images and the activation Hessian calculated with a minibatch size of 100 performs in comparison to the curvature factor being calculated without any data augmentation with a batch size of 100 or exactly. We note that without data augmentation we had to use much stronger regularisation of the curvature factors, in particular we had to add a non-negligible multiple of the identity to the factors, whereas with data augmentation it was only needed to ensure that the matrices are invertible. The Kronecker factored Laplace approximation reaches particularly high uncertainty on the untargeted adversarial attack and is most robust on the targeted attack when using data augmentation, suggesting that it is particularly well suited for large datasets and ones where some form of data augmentation can be applied. The difference between approximating the activation Hessian over a minibatch and calculating it exactly appears to be negligible.
D MEMORY AND COMPUTATIONAL REQUIREMENTS
If we denote the dimensionality of the input to layer λ as D_{λ−1} and its output as D_λ, the curvature factors correspond to the two precision matrices with D_{λ−1}(D_{λ−1}+1)/2 and D_λ(D_λ+1)/2 ‘parameters’ to estimate, since they are symmetric. So across a network, the number of curvature directions that we are estimating grows linearly in the number of layers and quadratically in the dimension of the layers, i.e. the number of columns of the weight matrices. The size of the full Hessian, on the other hand, grows quadratically in the number of layers and with the fourth power in the dimensionality of the layers (assuming they are all the same size).
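To make the scaling argument concrete, a quick back-of-the-envelope sketch with an arbitrary example width and depth (these numbers are illustrative assumptions only):
D, L = 1024, 10
kron_factors = L * 2 * D * (D + 1) // 2       # two symmetric D x D factors per layer
full_hessian = (L * D * D) ** 2               # dense Hessian over all weights (ignoring biases)
print(kron_factors, full_hessian)             # roughly 1.0e7 vs 1.1e14 entries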
Once the curvature factors are calculated, which only needs to be done once, we use their Cholesky decomposition to solve two triangular linear systems when sampling weights from the matrix normal distribution. We use the same weight samples for each minibatch, i.e. we do not sample a weight matrix per datapoint. This is for computational efficiency and does not change the expectation.
One possibility to save computation time would be to sample a fixed set of weight matrices from the approximate posterior — in order to avoid solving the linear system on every forward pass — and treat the networks that they define as an ensemble. The individual ensemble members can be evaluated in parallel and their outputs averaged, which can be done with a small overhead over evaluating a single network given sufficient compute resources. A further speed up can be achieved by distilling the predictive distributions of the Laplace network into a smaller, deterministic feedforward network as successfully demonstrated in (Balan et al., 2015) for posterior samples using HMC.
E COMPLEMENTARY FIGURES FOR THE TOY DATASET
Fig. 8 shows the different Laplace approximations (Kronecker factored, diagonal, full) from the main text without any hyperparameter tuning. The figure of the uncertainty obtained from samples using HMC is repeated. Note that the scale is larger than in the main text due to the high uncertainty of the Laplace approximations.
The Laplace approximations are increasingly uncertain away from the data, as is the true posterior estimated from HMC samples; however they all overestimate the uncertainty without regularisation. This is easy to fix by optimising the hyperparameters on a validation set as discussed in the main text, resulting in posterior uncertainty much more similar to the true posterior. As previously discussed in (Botev et al., 2017), the Hessian of a neural network is usually underdetermined as the number of data points is much smaller than the number of parameters — in our case we have 20 data points to estimate a 78×78 precision matrix. This leads to the full Laplace approximation vastly overestimating the uncertainty and a bad predictive mean. Both the Kronecker factored and the diagonal approximation exhibit smaller variance than the full Laplace approximation as they restrict the structure of the precision matrix. Consistently with the other experiments, we find the diagonal Laplace approximation to place more mass in low probability areas of the posterior than the Kronecker factored approximation, resulting in higher variance on the regression problem. This leads to a need for greater regularisation of the diagonal approximation to obtain acceptable predictive performance, and to it underestimating the uncertainty.
F PREDICTION ACCURACY
This section shows the accuracy values obtained from the different predictions methods on the feedforward networks for MNIST and the wide residual network for CIFAR100. The results for MNIST are shown in Table 1 and the results for CIFAR in Table 2.
In all cases, neither MC Dropout nor the Laplace approximation significantly change the classification accuracy of the network in comparison to a deterministic forward pass.
G IMPLEMENTATION DETAILS FOR RESIDUAL NETWORKS
Our wide residual network has n=3 block repetitions and a width factor of k=8 on CIFAR100 with and without Dropout, using hyperparameters taken from (Zagoruyko & Komodakis, 2016): the network parameters are trained on a cross-entropy loss using Nesterov momentum with an initial learning rate of 0.1 and momentum of 0.9 for 200 epochs with a minibatch size of 128. We decay the learning rate every 50 epochs by a factor of 0.2, which is slightly different to the schedule used in (Zagoruyko & Komodakis, 2016) (they decay after 60, 120 and 160 epochs). As the original authors, we use L2-regularisation with a factor of 5×10−4.
We make one small modification to the architecture: instead of downsampling with 1×1 convolutions with stride 2, we use 2×2 convolutions. This is due to Theano not supporting the transformation of images into the patches extracted by a convolution for 1×1 convolutions with stride greater than 1, which we require for our curvature backpropagation through convolutions.
We apply a standard Laplace approximation to the batch normalisation parameters — a Kronecker factorisation is not needed, since the parameters are one-dimensional. When calculating the curvature factors, we use the moving averages for the per-layer means and standard deviations obtained after training, in order to maintain independence between the data points in a minibatch.
We need to make a further approximation to the ones discussed in Section 2.2 when backpropagating the curvature for residual networks. The residual blocks compute a function of the form res(x) = x + fφ(x), where fφ typically is a sequence of convolutions, batch normalisation and elementwise nonlinearities. This means that we would need to pass back two curvature matrices, one for each summand. However, this would double the number of backpropagated matrices for each residual connection, hence the computation time/memory requirements would grow exponentially in the number of residual blocks. Therefore, we simply add the curvature matrices after each residual connection. | 1. What is the main contribution of the paper regarding deep network parameter estimation?
2. How does the proposed method build upon or relate to prior works such as Botev et al. (ICML 2017)?
3. What are the strengths and weaknesses of the experimental evaluation, and how could it be improved by comparing with other methods like factorized variational inference (Graves, 2011)?
4. What limitations does the reviewer perceive regarding the independence assumption across layers in the proposed approximation strategy, and how could it be addressed or generalized? | Review | Review
This paper proposes a Laplace approximation to approximate the posterior distribution over the parameters of deep networks.
The idea is interesting and the realization of the paper is good. The idea builds upon previous work in scalable Gauss-Newton methods for optimization in deep networks, notably Botev et al., ICML 2017. In this respect, I think that the novelty in the current submission is limited, as the approximation is essentially what is proposed in Botev et al., ICML 2017. The Laplace approximation requires the Hessian of the posterior, so techniques developed for Gauss-Newton optimization can straightforwardly be applied to construct Laplace approximations.
Having said that, the experimental evaluation is quite interesting and in-depth. I think it would have been interesting to report comparisons with factorized variational inference (Graves, 2011) as it is a fairly standard and widely adopted in Bayesian deep learning. This would have been an interesting way to support the claims on the poor approximation offered by standard variational inference.
I believe that the independence assumption across layers is a limiting factor of the proposed approximation strategy. Intuitively, changes in the weights in a given layer should affect the weights in other layers, so I would expect the posterior distribution over all the weights to reflect this through correlations across layers. I wonder how these results can be generalized to relax the independence assumption. |
ICLR | Title
A Scalable Laplace Approximation for Neural Networks
Abstract
We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture.
1 INTRODUCTION
Neural networks are most commonly trained in a maximum a posteriori (MAP) setting, which only yields point estimates of the parameters, ignoring any uncertainty about them. This often leads to overconfident predictions, especially in regimes that are weakly covered by training data or far away from the data manifold. While the confidence of wrong predictions is usually irrelevant in a research context, it is essential that a Machine Learning algorithm knows when it does not know in the real world, as the consequences of mistakes can be fatal, be it when driving a car or diagnosing a disease.
The Bayesian framework of statistics provides a principled way for avoiding overconfidence in the parameters by treating them as unknown quantities and integrating over all possible values. Specifically, for the prediction of new data under a model, it fits a posterior distribution over the parameters given the training data and weighs the contribution of each setting of the parameters to the prediction by the probability of the data under those parameters times their prior probability. However, the posterior of neural networks is usually intractable due to their size and nonlinearity.
There has been previous interest in integrating neural networks into the Bayesian framework (MacKay, 1992; Hinton & Van Camp, 1993; Neal, 1993; Barber & Bishop, 1998), however these approaches were designed for small networks by current standards. Recent adaptations to architectures of modern scale rely on crude approximations of the posterior to become tractable. All of (Graves, 2011; Hernández-Lobato & Adams, 2015; Blundell et al., 2015) assume independence between the individual weights. While they achieve good results on small datasets, this strong restriction of the posterior is susceptible to underestimating the uncertainty, in particular when optimising the variational bound. The approach in (Gal & Ghahramani, 2016) requires the use of certain stochastic regularisers which are not commonly present in most recent architectures. Furthermore, it is not clear if the approximate posterior defined by these regularisers is a good fit to the true posterior.
Recent work on second-order optimisation of neural networks (Martens & Grosse, 2015; Botev et al., 2017) has demonstrated that the diagonal blocks of the curvature can be well approximated by a Kronecker product. We combine this insight with the idea of modelling the posterior over the weights as a Gaussian, using a Laplace approximation (MacKay, 1992) with Kronecker factored covariance matrices. This leads to a computationally efficient matrix normal posterior distribution (Gupta & Nagar, 1999) over the weights of every layer. Since the Laplace approximation is applied after training, our approach can be used to obtain uncertainty estimates from existing networks.
∗Corresponding author: j.ritter@cs.ucl.ac.uk
2 THE CURVATURE OF NEURAL NETWORKS
Our method is inspired by recent Kronecker factored approximations of the curvature of a neural network (Martens & Grosse, 2015; Botev et al., 2017) for optimisation and we give a high-level review of these in the following. While the two methods approximate the Gauss-Newton and Fisher matrix respectively, as they are guaranteed to be positive semi-definite (p.s.d.), we base all of our discussion on the Hessian in order to be as general as possible.
2.1 NEURAL NETWORK NOTATION
We denote a feedforward network as taking an input a0 = x and producing an output hL. The intermediate representations for layers λ = 1, ..., L are denoted as hλ = Wλaλ−1 and aλ = fλ(hλ). We refer to aλ as the activations, and hλ as the (linear) pre-activations. The bias terms are absorbed into the Wλ by appending a 1 to each aλ. The network parameters are optimised w.r.t. an error function E(y, hL) for targets y. Most commonly used error functions, such as squared error and categorical cross-entropy, can be interpreted as exponential family negative log likelihoods − log p(y|hL).
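A small sketch of this notation — pre-activations h_λ = W_λ a_{λ−1}, activations a_λ = f_λ(h_λ), with the bias absorbed by appending a 1 to each activation; the use of NumPy and tanh here is an illustrative assumption.
import numpy as np

def forward(x, weights, f=np.tanh):
    # weights[l] maps the bias-augmented activation of the previous layer to the next pre-activation
    a = np.append(x, 1.0)                  # a_0 = x with a 1 appended for the bias
    for W in weights[:-1]:
        a = np.append(f(W @ a), 1.0)       # a_λ = f_λ(h_λ), bias unit re-appended
    return weights[-1] @ a                 # h_L, the linear network output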
2.2 KRONECKER FACTORED SECOND-ORDER OPTIMISATION
Traditional second-order methods use either the Hessian matrix or a positive semi-definite approximation thereof to generate parameter updates of the form ∆ = C−1g, where C is the chosen curvature matrix and g the gradient of the error function parameterised by the network. However, this curvature matrix is infeasible to compute for modern neural networks as their number of parameters is often in the millions, rendering the size of C of the order of several terabytes.
Recent work (Martens & Grosse, 2015; Botev et al., 2017) exploits that, for a single data point, the diagonal blocks of these curvature matrices are Kronecker factored:
Hλ = ∂²E / (∂vec(Wλ) ∂vec(Wλ)) = Qλ ⊗ ℋλ   (1)
where Hλ is the Hessian w.r.t. the weights in layer λ, Qλ = a_{λ−1} a_{λ−1}^T denotes the covariance of the incoming activations a_{λ−1}, and ℋλ = ∂²E / (∂hλ ∂hλ) the pre-activation Hessian, i.e. the Hessian of the error w.r.t. the linear pre-activations hλ in a layer. We provide the derivation for this result as well as the recursion for calculating ℋ in Appendix A. The Kronecker factorisation holds two key advantages: the matrices that need be computed and stored are much smaller — if we assume all layers to be of dimensionality D, the two factors are each of size D^2, whereas the full Hessian for the weights of only one layer would have D^4 elements. Furthermore, the inverse of a Kronecker product is equal to the Kronecker product of the inverses, so it is only necessary to invert those two moderately sized matrices.
In order to maintain this structure over a minibatch of data, all Kronecker factored second-order methods make two core approximations: First, they only model the diagonal blocks corresponding to the weights of a layer, such that the curvature decomposes into L independent matrices. Second, they assume Qλ andHλ to be independent. This is in order to maintain the Kronecker factorisation in expectation, i.e. E [Qλ ⊗Hλ] ≈ E [Qλ] ⊗ E [Hλ], since the expectation of a Kronecker product is not guaranteed to be Kronecker factored itself.
The main difference between the Kronecker factored second-order optimisers lies in how they efficiently approximate E [Hλ]. For exact calculation, it would be necessary to pass back an entire matrix per data point in a minibatch, which imposes infeasible memory and computational requirements. KFRA (Botev et al., 2017) simply passes back the expectation at every layer, while KFAC (Martens & Grosse, 2015) utilises the Fisher identity to only propagate a vector rather than a matrix, approximating the Kronecker factors with a stochastic rank-one matrix for each data point.
The diagonal blocks of the Hessian and Gauss-Newton matrix are equal for neural networks with piecewise linear activation functions (Botev et al., 2017), thus both methods can be used to directly approximate the diagonal blocks of the Hessian of such networks, as the Gauss-Newton and Fisher are equivalent for networks that parameterise an exponential family log likelihood.
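As a minimal sketch of the expectation approximation described above, the two per-layer Kronecker factors can be accumulated over a minibatch as follows; the per-sample activations and pre-activation Hessians are assumed to be available (e.g. from a backward recursion as in Appendix A), and the code is illustrative NumPy rather than the implementation used here.
import numpy as np

def expected_factors(activations, pre_act_hessians):
    # activations: list of a_{λ-1} vectors (one per data point, bias unit included)
    # pre_act_hessians: list of pre-activation Hessians for the same data points
    Q = np.mean([np.outer(a, a) for a in activations], axis=0)   # E[Q_λ]
    H = np.mean(pre_act_hessians, axis=0)                        # E[H_λ]
    return Q, H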
3 A SCALABLE LAPLACE APPROXIMATION FOR NEURAL NETWORKS
3.1 THE LAPLACE APPROXIMATION
The standard Laplace approximation is obtained by taking the second-order Taylor expansion around a mode of a distribution. For a neural network, such a mode can be found using standard gradient-based methods. Specifically, if we approximate the log posterior over the weights of a network given some data D around a MAP estimate θ∗, we obtain:
log p(θ|D) ≈ log p(θ∗|D) − ½ (θ − θ∗)^T H̄ (θ − θ∗)   (2)
where θ = [vec(W1), ..., vec(WL)] is the stacked vector of weights and H̄ = E [H] the average Hessian of the negative log posterior1. The first order term is missing because we expand the function around a maximum θ∗, where the gradient is zero. If we exponentiate this equation, it is easy to notice that the right-hand side is of Gaussian functional form for θ, thus we obtain a normal distribution by integrating over it. The posterior over the weights is then approximated as Gaussian:
θ ∼ N (θ∗, H̄−1) (3)
assuming H̄ is p.s.d. We can then approximate the posterior mean when predicting on unseen data D∗ by averaging the predictions of T Monte Carlo samples θ(t) from the approximate posterior:
p(D∗|D) = ∫ p(D∗|θ) p(θ|D) dθ ≈ (1/T) Σ_{t=1}^{T} p(D∗|θ^{(t)})   (4)
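A short sketch of the Monte Carlo prediction in Eq. 4: average the predictive distributions of T weight samples drawn from the approximate posterior. The two callables are assumed helpers supplied by the user, not part of the method itself.
import numpy as np

def predictive_distribution(x, sample_weights, predict_probs, T=100):
    # sample_weights() draws one θ^(t) from the approximate posterior;
    # predict_probs(x, theta) returns p(y|x, theta).
    probs = [predict_probs(x, sample_weights()) for _ in range(T)]
    return np.mean(probs, axis=0)   # Monte Carlo estimate of Eq. 4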
3.2 DIAGONAL LAPLACE APPROXIMATION
Unfortunately, it is not feasible to compute or invert the Hessian matrix w.r.t. all of the weights jointly. An approximation that is easy to compute in modern automatic differentiation frameworks is the diagonal of the Fisher matrix F , which is simply the expectation of the squared gradients:
H ≈ diag(F) = diag(E[∇θ log p(y|x) ∇θ log p(y|x)^T]) = diag(E[(∇θ log p(y|x))^2])   (5)
where diag extracts the diagonal of a matrix or turns a vector into a diagonal matrix. Such diagonal approximations to the curvature of a neural network have been used successfully for pruning the weights (LeCun et al., 1990) and, more recently, for transfer learning (Kirkpatrick et al., 2017).
This corresponds to modelling the weights with a Normal distribution with diagonal covariance:
vec(Wλ) ∼ N(vec(W∗_λ), diag(Fλ)^{-1})  for λ = 1, . . . , L   (6)
Unfortunately, even if the Taylor approximation is accurate, this will place significant probability mass in low probability areas of the true posterior if some weights exhibit high covariance.
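A minimal sketch of the diagonal Fisher approximation in Eq. 5, i.e. the expectation of the squared per-example gradients of the log likelihood; the gradient helper is an assumed placeholder.
import numpy as np

def diagonal_fisher(data, grad_log_lik):
    # grad_log_lik(x, y) is assumed to return the flat gradient of log p(y|x)
    # w.r.t. all parameters for one data point.
    sq_grads = [grad_log_lik(x, y) ** 2 for x, y in data]   # elementwise squares
    return np.mean(sq_grads, axis=0)                        # diag(F) as in Eq. 5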
1The average Hessian is typically scaled by the number of data points N . In order to keep the notation uncluttered, we develop our basic methods in terms of the average Hessian and discuss the scaling separately.
3.3 KRONECKER FACTORED LAPLACE APPROXIMATION
So while it is desirable to model the covariance between the weights, some approximations are needed in order to remain computationally efficient. First, we assume the weights of the different layers to be independent. This corresponds to the block-diagonal approximation in KFAC and KFRA, which empirically preserves sufficient information about the curvature to obtain competitive optimisation performance. For our purposes this means that our posterior factorises over the layers.
As discussed above, the Hessian of the log-likelihood for a single datapoint is Kronecker factored, and we denote the two factor matrices as Hλ = Qλ ⊗ ℋλ.² By further assuming independence between Q and ℋ in all layers, we can approximate the expected Hessian of each layer as:
E[Hλ] = E[Qλ ⊗ ℋλ] ≈ E[Qλ] ⊗ E[ℋλ]   (7)
Hence, the Hessian of every layer is Kronecker factored over an entire dataset and the Laplace approximation can be approximated by a product of Gaussians. Each Gaussian has a Kronecker factored covariance, corresponding to a matrix normal distribution (Gupta & Nagar, 1999), which considers the two Kronecker factors of the covariance to be the covariances of the rows and columns of a matrix. The two factors are much smaller than the full covariance and allow for significantly more efficient inversion and sampling (we review the matrix normal distribution in Appendix B).
Our resulting posterior for the weights in layer λ is then:
Wλ ∼ MN(W∗_λ, Q̄_λ^{-1}, H̄_λ^{-1})   (8)
In contrast to optimisation methods, we do not need to approximate E [Hλ] as it is only calculated once. However, when it is possible to augment the data (e.g. randomised cropping of images), it may be advantageous. We provide a more detailed discussion of this in Appendix C.
3.4 INCORPORATING THE PRIOR AND REGULARISING THE CURVATURE FACTORS
Just as the log posterior, the Hessian decomposes into a term depending on the data log likelihood and one on the prior. For the commonly used L2-regularisation, corresponding to a Gaussian prior, the Hessian is equal to the precision of the prior times the identity matrix. We approximate this by adding a multiple of the identity to each of the Kronecker factors from the log likelihood:
Hλ = N E[−∂² log p(D|θ) / ∂θ²] + τI ≈ (√N E[Qλ] + √τ I) ⊗ (√N E[ℋλ] + √τ I)   (9)
where τ is the precision of the Gaussian prior on the weights and N the size of the dataset. However, we can also treat them as hyperparameters and optimise them w.r.t. the predictive performance on a validation set. We emphasise that this can be done without retraining the network, so it does not impose a large computational overhead and is trivial to parallelise.
Setting N to a larger value than the size of the dataset can be interpreted as including duplicates of the data points as pseudo-observations. Adding a multiple of the identity to the precision matrix decreases the uncertainty about each parameter. This has a regularising effect both on our approximation to the true Laplace, which may be overestimating the variance in certain directions due to ignoring the covariances between the layers, and on the Laplace approximation itself, which may be placing probability mass in low probability areas of the true posterior.
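A small sketch of how the regularised factors in Eq. 9 can be formed once E[Qλ] and E[ℋλ] have been estimated; N and τ are the hyperparameters discussed above, and the NumPy formulation is illustrative only.
import numpy as np

def regularised_factors(EQ, EH, N, tau):
    # EQ, EH: expected Kronecker factors for one layer; N, tau: hyperparameters of Eq. 9
    Q_bar = np.sqrt(N) * EQ + np.sqrt(tau) * np.eye(EQ.shape[0])
    H_bar = np.sqrt(N) * EH + np.sqrt(tau) * np.eye(EH.shape[0])
    return Q_bar, H_bar   # per-layer precision factors; their inverses enter Eq. 8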
4 RELATED WORK
Most recent attempts to approximating the posterior of a neural network are based on formulating an approximate distribution to the posterior and optimising the variational lower bound w.r.t. its
2We assume a uniform prior for now, such that the Hessians of the posterior and the log likelihood are equal. We discuss how we incorporate a non-zero Hessian of a prior into the Kronecker factors in the next section.
parameters. (Graves, 2011; Blundell et al., 2015; Kingma et al., 2015) as well as the expectation propagation based approaches of (Hernández-Lobato & Adams, 2015) and (Ghosh et al., 2016) assume independence between the individual weights which, particularly when optimising the KL divergence, often lets the model underestimate the uncertainty about the weights. Gal & Ghahramani (2016) interpret Dropout to approximate the posterior with a mixture of delta functions, assuming independence between the columns. (Lakshminarayanan et al., 2016) suggest using an ensemble of networks for estimating the uncertainty.
Our work is a scalable approximation of (MacKay, 1992). Since the per-layer Hessian of a neural network is infeasible to compute, we suggest a factorisation of the covariance into a Kronecker product, leading to a more efficient matrix normal distribution. The posterior that we obtain is reminiscent of (Louizos & Welling, 2016) and (Sun et al., 2017), who optimise the parameters of a matrix normal distribution as their weights, which requires a modification of the training procedure.
5 EXPERIMENTS
Since the Laplace approximation is a method for predicting in a Bayesian manner and not for training, we focus on comparing to uncertainty estimates obtained from Dropout (Gal & Ghahramani, 2016). The trained networks will be identical, but the prediction methods will differ. We also compare to a diagonal Laplace approximation to highlight the benefit from modelling the covariances between the weights. All experiments are implemented using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015).3
5.1 TOY REGRESSION DATASET
As a first experiment, we visualise the uncertainty obtained from the Laplace approximations on a toy regression dataset, similar to (Hernández-Lobato & Adams, 2015). We create a dataset of 20 uniformly distributed points x ∼ U(−4, 4) and sample y ∼ N (x3, 32). In contrast to (HernándezLobato & Adams, 2015), we use a two-layer network with seven units per layer rather than one layer with 100 units. This is because both the input and output are one-dimensional, hence the weight matrices are vectors and the matrix normal distribution reduces to a multivariate normal distribution. Furthermore, the Laplace approximation is sensitive to the ratio of the number of data points to parameters, and we want to visualise it both with and without hyperparameter tuning.
Fig. 1 shows the uncertainty obtained from the Kronecker factored and diagonal Laplace approximation applied to the same network, as well as from a full Laplace approximation and 50, 000 HMC (Neal, 1993) samples. The latter two methods are feasible only for such a small model and dataset. For the diagonal and full Laplace approximation we use the Fisher identity and draw one sample per data point. We set the hyperparameters of the Laplace approximations (see Section 3.4) using a grid search over the likelihood of 20 validation points that are sampled the same way as the training set.
3We make our fork available at: https://github.com/BB-UCL/Lasagne
The regularised Laplace approximations all give an overall good fit to the HMC predictive posterior. Their uncertainty is slightly higher close to the training data and increases more slowly away from the data than that of the HMC posterior. The diagonal and full Laplace approximation require stronger regularisation than our Kronecker factored one, as they have higher uncertainty when not regularised. In particular the full Laplace approximation vastly overestimates the uncertainty without additional regularisation, leading to a bad predictive mean (see Appendix E for the corresponding figures), as the Hessian of the log likelihood is underdetermined. This is commonly the case in deep learning, as the number of parameters is typically much larger than the number of data points. Hence restricting the structure of the covariance is not only a computational necessity for most architectures, but also allows for more precise estimation of the approximate covariance.
5.2 OUT-OF-DISTRIBUTION UNCERTAINTY
For a more realistic test, similar to (Louizos & Welling, 2017), we assess the uncertainty of the predictions when classifying data from a different distribution than the training data. For this we train a network with two layers of 1024 hidden units and ReLU transfer functions to classify MNIST digits. We use a learning rate of 10−2 and momentum of 0.9 for 250 epochs. We apply Dropout with p=0.5 after each inner layer, as our chief interest is to compare against its uncertainty estimates. We further use L2-regularisation with a factor of 10−2 and randomly binarise the images during training according to their pixel intensities and draw 1,000 such samples per datapoint for estimating the curvature factors. We use this network to classify the images in the notMNIST dataset4, which contains 28×28 grey-scale images of the letters ‘A’ to ‘J’ from various computer fonts, i.e. not digits. An ideal classifier would make uniform predictions over its classes.
We compare the uncertainty obtained by predicting the digit class of the notMNIST images using 1. a deterministic forward pass through the Dropout trained network, 2. sampling different Dropout masks and averaging the predictions, 3. sampling different weight matrices from the matrix normal distribution obtained from our Kronecker factored Laplace approximation, and 4. doing the same with the diagonal one. As an additional baseline similar to (Blundell et al., 2015; Graves, 2011), we compare to a network with identical architecture with a fully factorised Gaussian (FFG) approximate posterior on the weights and a standard normal prior. We train the model on the variational lower bound using the reparametrisation trick (Kingma & Welling, 2013). We use 100 samples for the stochastic forward passes and optimise the hyperparameters of the Laplace approximations w.r.t. the cross-entropy on the validation set of MNIST.
We measure the uncertainty of the different methods as the entropy of the predictive distribution, which has a minimal value of 0 when a single class is predicted with certainty and a maximum of about 2.3 for uniform predictions. Fig. 2 shows the inverse empirical cumulative distribution of the entropy values obtained from the four methods. Consistent with the results in (Gal & Ghahramani, 2016), averaging the probabilities of multiple passes through the network yields predictions with higher uncertainty than a deterministic pass that approximates the geometric average (Srivastava et al., 2014). However, there still are some images that are predicted to be a digit with certainty. Our Kronecker factored Laplace approximation makes hardly any predictions with absolute certainty and assigns high uncertainty to most of the letters as desired. The diagonal Laplace approximation required stronger regularisation towards predicting deterministically, yet it performs similarly to Dropout. As shown in Table 1, however, the network makes predictions on the test set of MNIST with similar accuracy to the deterministic forward pass and MC Dropout when using our approximation. The variational factorised Gaussian posterior has low uncertainty as expected.
4From: http://yaroslavvb.blogspot.nl/2011/09/notmnist-dataset.html
5.3 ADVERSARIAL EXAMPLES
To further test the robustness of our prediction method close to the data distribution, we perform an adversarial attack on a neural network. As first demonstrated in (Szegedy et al., 2013), neural networks are prone to being fooled by gradient-based changes to their inputs. Li & Gal (2017) suggest, and provide empirical support, that Bayesian models may be more robust to such attacks, since they implicitly form an infinitely large ensemble by integrating over the model parameters. For our experiments, we use the fully connected net trained on MNIST from the previous section and compare the sensitivity of the different prediction methods for two kinds of adversarial attacks.
First, we use the untargeted Fast Gradient Sign method x_adv = x − η sgn(∇_x max_y log p^{(M)}(y|x)) suggested in (Goodfellow et al., 2014), which takes the gradient of the class predicted with maximal probability by method M w.r.t. the input x and reduces this probability with varying step size η. This step size is rescaled by the difference between the maximal and minimal value per dimension in the dataset. It is to be expected that this method generates examples away from the data manifold, as there is no clear subset of the data that corresponds to e.g. "not ones".
Fig. 3 shows the average predictive uncertainty and the accuracy on the original class on the MNIST test set as the step size η increases. The Kronecker factored Laplace approximation achieves significantly higher uncertainty than any other prediction method as the images move away from the data. Both the diagonal and the Kronecker factored Laplace maintain higher accuracy than MC Dropout on their original predictions. Interestingly, the deterministic forward pass appears to be most robust in terms of accuracy, however it has much smaller uncertainty on the predictions it makes and will confidently predict a false class for most images, whereas the other methods are more uncertain.
Furthermore, we perform a targeted attack that attempts to force the network to predict a specific class, in our case ‘0’, following (Li & Gal, 2017). Hence, for each method, we exclude all data points in the test set that are already predicted as ‘0’. The updates are of similar form to the untargeted attack, however they increase the probability of the pre-specified class y rather than decreasing the current maximum: x_y^{(t+1)} = x_y^{(t)} + η sgn(∇_x log p^{(M)}(y|x_y^{(t)})), where x_y^{(0)} = x.
We use a step size of η=10−2 for the targeted attack. The uncertainty and accuracy on the original and target class are shown in Fig. 4. Here, the Kronecker factored Laplace approximation has slightly smaller uncertainty at its peak in comparison to the other methods, however it appears to be much more robust. It only misclassifies over 50% of the images after about 20 steps, whereas this happens after roughly 10 steps for the other methods, and it only reaches 100% accuracy on the target class after almost 50 updates, whereas the other methods are fooled on all images after about 25 steps.
In conjunction with the experiment on notMNIST, it appears that the Laplace approximation achieves higher uncertainty than Dropout away from the data, as in the untargeted attack. In the targeted attack it exhibits smaller uncertainty than Dropout, yet it is more robust to having its prediction changed. The diagonal Laplace approximation again performs similarly to Dropout.
5.4 UNCERTAINTY ON MISCLASSIFICATIONS
To highlight the scalability of our method, we apply it to a state-of-the-art convolutional network architecture. Recently, deep residual networks (He et al., 2016a;b) have been the most successful ones among those. As demonstrated in (Grosse & Martens, 2016), Kronecker factored curvature methods are applicable to convolutional layers by interpreting them as matrix-matrix multiplications.
We compare our uncertainty estimates on wide residual networks (Zagoruyko & Komodakis, 2016), a recent variation that achieved competitive performance on CIFAR100 (Krizhevsky & Hinton, 2009) while, in contrast to most other residual architectures, including Dropout at specific points. While this does not correspond to using Dropout in the Bayesian sense (Gal & Ghahramani, 2015), it allows us to at least compare our method to the uncertainty estimates obtained from Dropout.
We note that it is straightforward to incorporate batch normalisation (Ioffe & Szegedy, 2015) into the curvature backpropagation algorithms, so we apply a standard Laplace approximation to its parameters as well. We are not aware of any interpretation of Dropout as performing Bayesian inference on the parameters of batch normalisation. Further implementation details are in Appendix G.
Again, the accuracy of the prediction methods is comparable (see Table 2 in Appendix F). For calculating the curvature factors, we draw 5, 000 samples per image using the same data augmentation as during training, effectively increasing the dataset size to 2.5×108. The diagonal approximation had to be regularised to the extent of becoming deterministic, so we omit it from the results.
In Fig. 5 we compare the distribution of the predictive uncertainty on the test set.5 We distinguish between the uncertainty on correct and incorrect classifications, as the mistakes of a system used in practice may be less severe if the network can at least indicate that it is uncertain. Thus, high uncertainty on misclassifications and low uncertainty on correct ones would be desirable, such that a system could return control to a human expert when it cannot make a confident decision. In general, the network tends to be more uncertain on its misclassifications than its correct ones, regardless of whether it was trained with or without Dropout and of the method used for prediction. Both Dropout and the Laplace approximation similarly increase the uncertainty in the predictions, however this is irrespective of the correctness of the classification. Yet, our experiments show that the Kronecker factored Laplace approximation can be scaled to modern convolutional networks and maintain good classification accuracy while having similar uncertainty about the predictions as Dropout.
We had to use much stronger regularisation for the Laplace approximation on the wide residual network, possibly because the block-diagonal approximation becomes more inaccurate on deep networks, possibly because the number of parameters is much higher relative to the number of data. It would be interesting to see how the Laplace approximations behaves on a much larger dataset like ImageNet for similarly sized networks, where we have a better ratio of data to parameters and curvature directions. However, even on a relatively small dataset like CIFAR we did not have to regularise the Laplace approximation to the degree of the posterior becoming deterministic.
5We use the first 5, 000 images as a validation set to tune the hyperparameters of our Laplace approximation and the final 5, 000 ones for evaluating the predictive uncertainty on all methods.
6 CONCLUSION
We presented a scalable approximation to the Laplace approximation for the posterior of a neural network and provided experimental results suggesting that the uncertainty estimates are on par with current alternatives like Dropout, if not better. It enables practitioners to obtain principled uncertainty estimates from their models, even if they were trained in a maximum likelihood/MAP setting.
There are many possible extensions to this work. One would be to automatically determine the scale and regularisation hyperparameters of the Kronecker factored Laplace approximation using the model evidence similar to how (MacKay, 1992) interpolates between the data log likelihood and the width of the prior. The model evidence could further be used to perform Bayesian model averaging on ensembles of neural networks, potentially improving their generalisation ability and uncertainty estimates. A challenging application would be active learning, where only little data is available relative to the number of curvature directions that need to be estimated.
ACKNOWLEDGEMENTS
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1. We thank the anonymous reviewers for their feedback and Harshil Shah for his comments on an earlier draft of this paper.
A DERIVATION OF THE ACTIVATION HESSIAN RECURSION
Here, we provide the basic derivation of the factorisation of the diagonal blocks of the Hessian in Eq. 1 and the recursive formula for calculating ℋ as presented in (Botev et al., 2017). The Hessian of a neural network with parameters θ as defined in the main text has elements:
[H]_{ij} = ∂²E(θ) / (∂θ_i ∂θ_j)   (10)
For a given layer λ, the gradient w.r.t. a weight W^λ_{a,b} is:
∂E/∂W^λ_{a,b} = Σ_i (∂h^λ_i/∂W^λ_{a,b}) (∂E/∂h^λ_i) = a^{λ−1}_b ∂E/∂h^λ_a   (11)
Keeping λ fixed and differentiating again, we find that the per-sample Hessian of that layer is:
[Hλ]_{(a,b),(c,d)} ≡ ∂²E / (∂W^λ_{a,b} ∂W^λ_{c,d}) = a^{λ−1}_b a^{λ−1}_d [ℋλ]_{a,c}   (12)
where
[ℋλ]_{a,b} = ∂²E / (∂h^λ_a ∂h^λ_b)   (13)
is the pre-activation Hessian.
We can re-express this in matrix notation as a Kronecker product as in Eq. 1:
Hλ = ∂²E / (∂vec(Wλ) ∂vec(Wλ)) = (a_{λ−1} a_{λ−1}^T) ⊗ ℋλ   (14)
The pre-activation Hessian can be calculated recursively as:
ℋλ = Bλ W_{λ+1}^T ℋ_{λ+1} W_{λ+1} Bλ + Dλ   (15)
where the diagonal matrices B and D are defined as:
Bλ = diag(f′_λ(hλ))   (16)
Dλ = diag(f″_λ(hλ) ∂E/∂aλ)   (17)
f′ and f″ denote the first and second derivative of the transfer function. The recursion is initialised with the Hessian of the error w.r.t. the linear network outputs.
For further details and on how to calculate the diagonal blocks of the Gauss-Newton and Fisher matrix, we refer the reader to (Botev et al., 2017) and (Martens & Grosse, 2015).
B MATRIX NORMAL DISTRIBUTION
The matrix normal distribution (Gupta & Nagar, 1999) is a multivariate distribution over an entire matrix of shape n × p rather than just a vector. In contrast to the multivariate normal distribution, it is parameterised by two p.s.d. covariance matrices, U : n × n and V : p × p, which indicate the covariance of the rows and columns respectively. In addition it has a mean matrix M : n × p. A vectorised sample from a matrix normal distribution X ∼ MN(M, U, V) corresponds to a sample from a normal distribution vec(X) ∼ N(vec(M), U ⊗ V). However, samples can be drawn more efficiently as X = M + AZB with Z ∼ MN(0, I, I), and AA^T = U and B^T B = V. The sample Z corresponds to a sample from a normal distribution of length np that has been reshaped to a n × p matrix. This is more efficient in the sense that we only need to calculate two matrix-matrix products of small matrices, rather than a matrix-vector product with one big one.
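As an illustration, here is a minimal NumPy sketch of this sampling scheme (the function and variable names are ours):

import numpy as np

def sample_matrix_normal(M, U, V, rng=np.random.default_rng()):
    # Draw X ~ MN(M, U, V) as X = M + A Z B with A A^T = U and B^T B = V,
    # avoiding the np x np covariance U (x) V of the vectorised distribution.
    A = np.linalg.cholesky(U)         # lower triangular, A A^T = U
    B = np.linalg.cholesky(V).T       # upper triangular, B^T B = V
    Z = rng.standard_normal(M.shape)  # Z ~ MN(0, I, I)
    return M + A @ Z @ B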
C APPROXIMATION OF THE EXPECTED ACTIVATION HESSIAN
While the square root of Q_λ is calculated during the forward pass on all layers, H requires an additional backward pass. Strictly speaking, it is not essential to approximate E[H] for the Kronecker factored Laplace approximation, as in contrast to optimisation procedures the curvature only needs to be calculated once and is thus not time critical. For datasets of the scale of ImageNet and the networks used for such datasets, it would still be impractically slow to perform the calculation for every data point individually. Furthermore, as most datasets are augmented during training, e.g. random cropping or reflections of images, the curvature of the network can be estimated using the same augmentations, effectively increasing the size of the dataset by orders of magnitude. Thus, we make use of the minibatch approximation in our experiments — as we make use of data augmentation — in order to demonstrate its practical applicability.
We note that E[H] can be calculated exactly by running KFRA (Botev et al., 2017) with a minibatch size of one, and then averaging the results. KFAC (Martens & Grosse, 2015), in contrast, stochastically approximates the Fisher matrix, so even when run for every datapoint separately, it cannot calculate the curvature factor exactly.
In the following, we also show figures for the adversarial experiments in which we calculate the curvature per datapoint and without data augmentation:
Fig. 6 and Fig. 7 show how the Laplace approximation with the curvature estimated from 1000 randomly sampled binary MNIST images and the activation Hessian calculated with a minibatch size of 100 performs in comparison to the curvature factor being calculated without any data augmentation with a batch size of 100 or exactly. We note that without data augmentation we had to use much stronger regularisation of the curvature factors, in particular we had to add a non-negligible multiple of the identity to the factors, whereas with data augmentation it was only needed to ensure that the matrices are invertible. The Kronecker factored Laplace approximation reaches particularly high uncertainty on the untargeted adversarial attack and is most robust on the targeted attack when using data augmentation, suggesting that it is particularly well suited for large datasets and ones
where some form of data augmentation can be applied. The difference between approximating the activation Hessian over a minibatch and calculating it exactly appears to be negligible.
D MEMORY AND COMPUTATIONAL REQUIREMENTS
If we denote the dimensionality of the input to layer λ as D_{λ−1} and its output as D_λ, the curvature factors correspond to the two precision matrices with D_{λ−1}(D_{λ−1}+1)/2 and D_λ(D_λ+1)/2 ‘parameters’ to estimate, since they are symmetric. So across a network, the number of curvature directions that we are estimating grows linearly in the number of layers and quadratically in the dimension of the layers, i.e. the number of columns of the weight matrices. The size of the full Hessian, on the other hand, grows quadratically in the number of layers and with the fourth power in the dimensionality of the layers (assuming they are all the same size).
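As a concrete illustration of this scaling, the small sketch below counts the entries to estimate for a toy stack of equally sized layers (the dimensions are made up for illustration only):

def kron_factor_entries(dims):
    # One pair of symmetric factors per layer: D_{l-1}(D_{l-1}+1)/2 + D_l(D_l+1)/2 entries.
    return sum(d_in * (d_in + 1) // 2 + d_out * (d_out + 1) // 2
               for d_in, d_out in zip(dims[:-1], dims[1:]))

def full_hessian_entries(dims):
    # The full symmetric Hessian over all weights of the network.
    n_weights = sum(d_in * d_out for d_in, d_out in zip(dims[:-1], dims[1:]))
    return n_weights * (n_weights + 1) // 2

dims = [512] * 5                      # four layers of width 512
print(kron_factor_entries(dims))      # ~1.1e6: linear in depth, quadratic in width
print(full_hessian_entries(dims))     # ~5.5e11: quadratic in depth, quartic in width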
Once the curvature factors are calculated, which only needs to be done once, we use their Cholesky decomposition to solve two triangular linear systems when sampling weights from the matrix normal distribution. We use the same weight samples for each minibatch, i.e. we do not sample a weight matrix per datapoint. This is for computational efficiency and does not change the expectation.
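A sketch of this sampling step, assuming the two regularised curvature factors are given as the row and column precision matrices of the matrix normal posterior (the names are illustrative, not the actual implementation):

import numpy as np
from scipy.linalg import solve_triangular

def sample_weight(M, P_row, P_col, rng=np.random.default_rng()):
    # Sample W ~ MN(M, P_row^{-1}, P_col^{-1}) with two triangular solves against small matrices.
    L_r = np.linalg.cholesky(P_row)                  # P_row = L_r L_r^T (computed once, reused)
    L_c = np.linalg.cholesky(P_col)                  # P_col = L_c L_c^T (computed once, reused)
    Z = rng.standard_normal(M.shape)
    Y = solve_triangular(L_r.T, Z, lower=False)      # Y = L_r^{-T} Z
    X = solve_triangular(L_c.T, Y.T, lower=False).T  # X = Y L_c^{-1}
    return M + X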
One possibility to save computation time would be to sample a fixed set of weight matrices from the approximate posterior — in order to avoid solving the linear system on every forward pass — and treat the networks that they define as an ensemble. The individual ensemble members can be evaluated in parallel and their outputs averaged, which can be done with a small overhead over evaluating a single network given sufficient compute resources. A further speed up can be achieved by distilling the predictive distributions of the Laplace network into a smaller, deterministic feedforward network as successfully demonstrated in (Balan et al., 2015) for posterior samples using HMC.
E COMPLEMENTARY FIGURES FOR THE TOY DATASET
Fig. 8 shows the different Laplace approximations (Kronecker factored, diagonal, full) from the main text without any hyperparameter tuning. The figure of the uncertainty obtained from samples using HMC is repeated. Note that the scale is larger than in the main text due to the high uncertainty of the Laplace approximations.
The Laplace approximations are increasingly uncertain away from the data, as is the true posterior estimated from HMC samples; however, they all overestimate the uncertainty without regularisation. This is easy to fix by optimising the hyperparameters on a validation set as discussed in the main text, resulting in posterior uncertainty much more similar to the true posterior. As previously discussed in (Botev et al., 2017), the Hessian of a neural network is usually underdetermined as the number of data points is much smaller than the number of parameters — in our case we have 20 data points to estimate a 78×78 precision matrix. This leads to the full Laplace approximation vastly overestimating the uncertainty and a bad predictive mean. Both the Kronecker factored and the diagonal approximation exhibit smaller variance than the full Laplace approximation as they restrict the structure of the precision matrix. Consistently with the other experiments, we find the diagonal Laplace approximation to place more mass in low probability areas of the posterior than the Kronecker factored approximation, resulting in higher variance on the regression problem. This leads to a need for greater regularisation of the diagonal approximation to obtain acceptable predictive performance, and consequently to underestimating the uncertainty.
F PREDICTION ACCURACY
This section shows the accuracy values obtained from the different predictions methods on the feedforward networks for MNIST and the wide residual network for CIFAR100. The results for MNIST are shown in Table 1 and the results for CIFAR in Table 2.
In all cases, neither MC Dropout nor the Laplace approximation significantly change the classification accuracy of the network in comparison to a deterministic forward pass.
G IMPLEMENTATION DETAILS FOR RESIDUAL NETWORKS
Our wide residual network has n=3 block repetitions and a width factor of k=8 on CIFAR100 with and without Dropout using hyperparameters taken from (Zagoruyko & Komodakis, 2016): the network parameters are trained on a cross-entropy loss using Nesterov momentum with an initial learning rate of 0.1 and momentum of 0.9 for 200 epochs with a minibatch size of 128. We decay the learning rate every 50 epochs by a factor of 0.2, which is slightly different to the schedule used in (Zagoruyko & Komodakis, 2016) (they decay after 60, 120 and 160 epochs). Like the original authors, we use L2-regularisation with a factor of 5×10^{-4}.
We make one small modification to the architecture: instead of downsampling with 1×1 convolutions with stride 2, we use 2×2 convolutions. This is due to Theano not supporting the transformation of images into the patches extracted by a convolution for 1×1 convolutions with stride greater than 1, which we require for our curvature backpropagation through convolutions.
We apply a standard Laplace approximation to the batch normalisation parameters — a Kronecker factorisation is not needed, since the parameters are one-dimensional. When calculating the curvature factors, we use the moving averages for the per-layer means and standard deviations obtained after training, in order to maintain independence between the data points in a minibatch.
We need to make a further approximation to the ones discussed in Section 2.2 when backpropagating the curvature for residual networks. The residual blocks compute a function of the form res(x) = x + fφ(x), where fφ typically is a sequence of convolutions, batch normalisation and elementwise nonlinearities. This means that we would need to pass back two curvature matrices, one for each summand. However, this would double the number of backpropagated matrices for each residual connection, hence the computation time/memory requirements would grow exponentially in the number of residual blocks. Therefore, we simply add the curvature matrices after each residual connection. | 1. What is the main contribution of the paper regarding neural network posteriors?
2. What are the strengths of the proposed approach, particularly in its mathematical exposition and clarity?
3. What are the weaknesses or limitations of the method, including the need for regularization and the complexity of the approach?
4. How does the reviewer assess the significance of the experimental results presented in the paper?
5. Are there any suggestions for additional experiments or comparisons with other works in the field?
6. How does the reviewer evaluate the overall novelty and impact of the paper's contributions? | Review | Review
This paper uses recent progress in the understanding and approximation of curvature matrices in neural networks to revisit a venerable area: that of Laplace approximations to neural network posteriors. The Laplace method requires two stages: 1) obtaining a point estimate of the parameters followed by 2) estimation of the curvature. Since 1) is close to common practice it raises the appealing possibility of adding 2) after the fact, although the prior may be difficult to interpret in this case. A pitfall is that the method needs the point estimate to fall in a locally quadratic bowl or to add regularisation to make this true. The necessary amount of regularisation can be large as reported in section 5.4.
The paper is generally well written. In particular the mathematical exposition attains good clarity. Much of the mathematical treatment of the curvature was already discussed by Martens and Grosse and Botev et al in previous works. The paper is generally well referenced.
Given the complexity of the method, I think it would have helped to submit the code in anonymized form at this point. There are also some experiments not there that would improve the contribution. Figure 1 should include a comparison to Hamiltonian Monte Carlo and the full Laplace approximation (it is not sufficient to point to experiments in Hernandez-Lobato and Adams 2015 with a different model/prior). The size of model and data would not be prohibitive for either of these methods in this instance. All that figure 1 shows at the moment is that the proposed approximation has smaller predictive variance than the fully diagonal variant of the method.
It would be interesting (but perhaps not essential) to compare the Laplace approximation to other scalable methods from the literature such as that of Louizos and Welling 2016 which also used matrix normal distributions. It is good that the paper includes a modern architecture with a more challenging dataset. It is a shame the method does not work better in this instance but the authors should not be penalized for reporting this. I think a paper on a probabilistic method should at some point evaluate log likelihood in a case where the test distribution is the same as the training distribution. This complements experiments where there is dataset shift and we wish to show robustness. I would be very interested to know how useful the implied marginal likelihoods of the approximation were, as suggested for further work in the conclusion.
ICLR | Title
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models
Abstract
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
N/A
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
1 INTRODUCTION
Adversarial audio algorithms are designed to force Automatic Speech Recognition (ASR) models to produce incorrect outputs. They do so by introducing small amounts of imperceptible, carefully crafted noise to benign audio samples that can force the ASR model to produce incorrect transcripts. Specifically, targeted adversarial attacks (Carlini & Wagner, 2018; Qin et al., 2019) are designed to force ASR models to output any target sentence of the attacker’s choice. However, these attacks have limited effectiveness as they make unreasonable assumptions (e.g., white-box access to the model weights), which are unlikely to be satisfied in real-world settings.
An attacker could hypothetically bypass this limitation by using the transferability property of adversarial samples: they generate adversarial samples for a white-box proxy model; then pass these to a different remote black-box model, as we illustrate in Figure 1a. Transferability has been successfully demonstrated in other machine learning domains, like computer vision (Papernot et al., 2016). Yet for ASR, recent work has shown that transferability is close to non-existent between large models Abdullah et al. (2021b), even between identically trained models (i.e., same training hyper-parameters, even including the random initialization seed). These findings were demonstrated on older ASR architectures, specifically on LSTM-based DeepSpeech2 models trained with CTC loss. However, robustness properties sometimes vary considerably between different ASR architectures (Lu et al., 2021; Olivier & Raj, 2022), and it is worth studying adversarial transferability on more recent families of models.
In this work, we evaluate the robustness of modern transformer-based ASR architectures. We show that many state-of-the-art ASR models are in fact vulnerable to the transferability property. Specifically, our core finding can be formulated as follows:
Pretraining transformer-based ASR models with Self-Supervised Learning (SSL) makes them vulnerable to transferable adversarial attacks.
SSL is an increasingly popular learning paradigm in ASR (Figure 1b), used to boost model performance by leveraging large amounts of unlabeled data. We demonstrate that it hinders robustness by making the following contributions:
• First, we show that most public SSL-pretrained ASR models are vulnerable to transferability. We generate 85 adversarial samples for the proxy HuBERT and Wav2Vec2 models (Section 3). We show that these samples are effective against a wide panel of public transformer-based ASRs. This includes ASRs trained on different data than our proxies.
• Second, we show that SSL-pretraining is the reason for this vulnerability to transferability. We do so using an ablation study on Wav2Vec2-type models.
• Third, we propose an explanation for this curious phenomenon. We argue that targeted ASR attacks need considerable feature overlap to be transferable; and that SSL objectives encourage such feature overlap between different models.
Our results show that SSL, a line of work gathering attention in the ASR community that has pushed the state-of-the-art on many benchmarks, is also a source of vulnerability. Formerly innocuous attacks with unreasonable assumptions are now effective against many modern models. As it is likely that SSL will be used to train ASR systems in production, our results pave the way for practical, targeted attacks in the real world. By no means do these results imply that this line of work should be aborted, but they emphasize the pressing need to focus on robustness alongside performance.
2 BACKGROUND
2.1 SSL PRETRAINING FOR ASR MODELS
We describe in this Section the principles of SSL-pretrained ASR models, whose robustness to attacks we evaluate in this work. These models usually follow the neural architecture of Wav2Vec2 (Baevski et al., 2020). Raw audio inputs are fed directly to a CNN. A Transformer encodes the CNN outputs into contextualized representations. A final feed-forward network projects these representations in a character output space. The model is fine-tuned with CTC loss (Graves et al., 2006).
A number of different models follow this architecture, including Wav2Vec2, HuBERT (Hsu et al., 2021), Data2Vec (Baevski et al., 2022), UniSpeech-SAT (Wang et al., 2021; Chen et al., 2021b) or WavLM (Chen et al., 2021a). These networks only have very minor differences in their architectures, to the point that standardized sizes are used for all of them. Base models have 12 transformer hidden layers and 90M parameters. Large models have 24 layers and 300M parameters. Finally, XLarge models have 48 layers for a total of 1B parameters.
While the networks are similar, the training pipelines of these models differ substantially. All models are pretrained on large amounts of unlabeled data, then fine-tuned for ASR on varying quantities of labeled data. The pretraining involves SSL objectives, such as Quantization and Contrastive Learning (Wav2Vec2), offline clustering and masked predictions (HuBERT), or masked prediction
of contextualized labels (Data2Vec). Unispeech combines SSL and CTC pretraining with multitask learning. WavLM adds denoising objectives and scales to even greater amounts of unlabeled data.
SSL pretraining is helpful in many regards: it makes the same network easy to fine-tune for multiple downstream tasks with little labeled data and has improved state-of-the-art results in ASR benchmarks, especially in low-resource settings. As we demonstrate, it is also a source of vulnerabilities.
2.2 ADVERSARIAL ATTACKS
Adversarial examples are inputs modified imperceptibly by an attacker to fool machine learning models (Szegedy et al., 2014; Goodfellow et al., 2014; Carlini & Wagner, 2016; Madry et al., 2018). While most works have focused on image classification, several have created or adapted attacks for other tasks such as ASR (Cisse et al., 2017; Carlini & Wagner, 2018; Qin et al., 2019).
The attack we use is based on the Carlini&Wagner ASR attack (Carlini & Wagner, 2018), although slightly simplified. Given an input x, a target transcription yt, and an ASR model f trained with loss L, our attack finds an additive perturbation δ optimizing the following objective:
\min_\delta \; L(f(x+\delta), y_t) + c\,\|\delta\|_2^2 \quad \text{s.t.} \quad \|\delta\|_\infty < \epsilon \qquad (1)
which we optimize using L∞ Projected Gradient Descent. While the CW attack typically uses a large initial ϵ, then gradually reduces it as it finds successful perturbations, we fix a single value of ϵ and optimize for a fixed number of iterations. We find that this scheme, closer to the PGD algorithm Madry et al. (2018), greatly improves attack transferability. However we keep using the L2 regularization term c\,\|\delta\|_2^2 introduced in the CW attack. We also find that applying regularization such as dropout during attack optimization greatly helps to generate transferable perturbations. This effect is analyzed in more detail in Appendix D.3. Throughout the rest of the paper, we run all attack optimization steps using the default dropout, layer drop, etc. that the proxy model used during training (typically a dropout of 0.1).
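A minimal PyTorch-style sketch of this optimization loop (model, loss_fn and the hyperparameter values shown are placeholders; the actual attacks are run with the robust_speech implementation described in Appendix A):

import torch

def attack(model, loss_fn, x, target, eps, c=10.0, lr=5e-4, steps=1000):
    # L-inf PGD on the perturbation delta, minimising the loss towards the target
    # plus the L2 penalty of Eq. 1. Keeping the model in train() mode leaves dropout
    # and layer drop active during optimisation, which helps transferability.
    model.train()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x + delta), target) + c * delta.pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # projection onto the L-inf ball
    return (x + delta).detach()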
3 TRANSFERABLE ATTACK ON STATE-OF-THE-ART ASR MODELS
In our core experiment, we fool multiple state-of-the-art SSL-pretrained ASR models with targeted and transferred adversarial attacks. We generate a small set of targeted audio adversarial examples using fixed proxy models. We then transfer those same examples on a large number of models available in the HuggingFace Transformers library. Table 1 specifies how much unlabeled and labeled data these models were trained on. We provide the full experimental details in appendix A.
3.1 GENERATING ADVERSARIAL EXAMPLES ON PROXIES
We describe our procedure to generate adversarial examples. To maximize the transferability success rate of our perturbations we improve the base attack in Section 2.2 in several key ways:
• To limit attack overfitting on our proxy, we combine the losses of two proxy models: Wav2Vec2 and HuBERT (LARGE). Both models were pretrained on the entire LV60k dataset and finetuned on 960h of LibriSpeech. As these models have respectively a contrastive and predictive objective, they are a representative sample of SSL-pretrained ASR models. The sum of their losses is used as the optimization objective in Equation 1.
• We use 10000 optimization steps, which is considerable (for comparison Carlini & Wagner (2018) use 4000) and can also lead to the adversarial noise overfitting the proxy models. To mitigate this effect we use a third model, the Data2Vec BASE network trained on LibriSpeech, as a stopping criterion for the attack. At each attack iteration, we feed our adversarial example to Data2Vec, and keep track of the best-performing perturbation (in terms of WER). We return that best perturbation at the end of the attack. Because this procedure is computationally expensive, we only apply it to a subset A of 85 utterances of less than 7 seconds. We sample them randomly in the LibriSpeech test-clean set. We select attack targets at random: we sample a completely disjoint subset B of
utterances in the LibriSpeech test-other set. To each utterance in A we assign as target the transcription of the sentence in B whose length is closest to its own. This ensures that a very long target isn’t assigned to a very short utterance or vice versa.
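A sketch of how the summed proxy losses and the held-out stopping criterion fit together (the callables proxy_losses and val_wer are placeholders standing for the two proxy models' losses and the WER of Data2Vec against the target; the L2 penalty of Eq. 1 is omitted here for brevity):

import torch

def attack_with_proxies(x, target, proxy_losses, val_wer, eps, lr=5e-4, steps=10000):
    # proxy_losses: list of callables (x_adv, target) -> scalar loss, one per proxy model.
    # val_wer:      callable x_adv -> WER of the held-out model w.r.t. the target.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    best_delta, best_wer = delta.detach().clone(), float("inf")
    for _ in range(steps):
        opt.zero_grad()
        sum(loss(x + delta, target) for loss in proxy_losses).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
            w = val_wer(x + delta)   # keep the perturbation that best fools the held-out model
            if w < best_wer:
                best_wer, best_delta = w, delta.clone()
    return (x + best_delta).detach()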
3.2 TRANSFERRING ADVERSARIAL EXAMPLES ON ASR MODELS
We evaluate all SSL-pretrained models mentioned in Section 2.1, along with several others for comparison: the massively multilingual speech recognizer or M-CTC (Lugosch et al., 2022) trained with pseudo-labeling, and models trained from scratch for ASR: the Speech-to-text model from Fairseq (Wang et al., 2020) and the CRDNN and Transformer from SpeechBrain (Ravanelli et al., 2021).
3.3 METRICS
We evaluate the performance of ASR models with the Word-Error-Rate (WER) between the model output and the correct outputs.
When evaluating the success of adversarial examples, we can also use the Word-Error-Rate. Between the prediction and the attack target yt, a low WER indicates a successful attack. We therefore define the word-level targeted attack success rate as
TASR = max(1−WER(f(x+ δ), yt), 0) (2)
It is also interesting to look at the results of the attack in terms of denial-of-service, i.e. the attack’s ability to stop the model from predicting the correct transcription y. Here a high WER indicates a successful attack. We define the word-level untargeted attack success rate as
UASR = min(WER(f(x+ δ), y), 1) (3)
We can also compute the attack success rate at the character level, i.e. using the Character-Error-Rate (CER) instead of the Word-Error-Rate. Character-level metrics are interesting when using weaker attacks that affect the model, but not enough to reduce the targeted WER significantly. We use them in our ablation study in section 4.
Finally, we control the amount of noise in our adversarial examples with the Signal-Noise Ratio (SNR), defined as

SNR(\delta, x) = 10 \log\left(\frac{\|x\|_2^2}{\|\delta\|_2^2}\right) \qquad (4)
for an input x and a perturbation δ. When generating adversarial examples we adjust the L∞ bound ϵ (Equation 1) to achieve a target SNR.
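For reference, these metrics amount to the following small sketch (wer is a placeholder for any standard word-error-rate implementation, called as wer(reference, hypothesis)):

import numpy as np

def targeted_success(prediction, target, wer):    # Eq. 2
    return max(1.0 - wer(target, prediction), 0.0)

def untargeted_success(prediction, truth, wer):   # Eq. 3
    return min(wer(truth, prediction), 1.0)

def snr_db(x, delta):                             # Eq. 4, in decibels
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(delta ** 2))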
3.4 RESULTS
We report the results of our adversarial examples in Table 1 for ϵ = 0.015, corresponding to a Signal-Noise Ratio of 30dB on average. In Appendix D.1 we also report results for a larger ϵ value.
On 12 out of 16 models, we observe that the attack achieves total denial-of-service: the untargeted success rate is 100%. Moreover, on the first 6 models (proxies aside), the targeted attack success rate ranges between 50% and 81%: the target is more than half correctly predicted! These results are in flagrant contradiction with past works on DeepSpeech2-like models, where even the slightest change in training leads to a total absence of targeted transferability between proxy and private model. Our private models vary from the proxies in depth, number of parameters and even training methods, yet we observe important transferability. However, these 6 models have all been pretrained on LibriSpeech or Libri-Light with SSL pretraining, i.e. the same data distribution as our proxies.
The following five models were pretrained on different datasets. One was pretrained on a combination of Libri-Light, VoxPopuli and GigaSpeech; two on Libri-Light, CommonVoice, SwitchBoard and Fisher; and two on CommonVoice either multilingual or English-only. The transferability success rate on these five models ranges from 18% to 67%, which is significant. Even the CommonVoice models, whose training data has no intersection with Libri-Light, are partially affected.
Although our inputs and attack targets are in English, we apply them to a French-only CommonVoice Wav2vec2. This model, incapable of decoding clean LibriSpeech data, is also unaffected by our targeted perturbation. It therefore seems that, while multilingual models are not robust to our examples, a minimal performance on the original language is required to observe transferability.
The final 4 models for which the targeted transferability rate is null or close to null, are those that were not SSL-pretrained at all (including M-CTC which was pretrained with pseudo-labeling). These four models also partially resist the untargeted attack.
It emerges from these results that some recent ASR models, specifically those pretrained with SSL, can be vulnerable to transferred attacks. These results diverge significantly from previous works like (Abdullah et al., 2021b; 2022a) which showed no transferability between different models. Table 1 hints that SSL pretraining plays an important role in transferability, but does not prove it: to do so we would need to compare models of identical architecture and performance, pretrained and trained from scratch, both as proxy and target. This is what we do in the next section.
4 IDENTIFYING THE FACTORS THAT ENABLE ATTACK TRANSFERABILITY
In this section, we conduct a thorough ablation study and establish rigorously that SSL pretraining makes ASR models vulnerable to transferred attacks. We also measure the influence of several other factors on transferability. This ablation study requires the generation of many sets of adversarial examples, using varying models as proxy, which would be computationally difficult with the improved attack introduced in section 3.1. Since we do not seek optimal performance, throughout this section we run the base attack in Section 2.2 with 1000 optimization steps.
4.1 INFLUENCE OF SELF-SUPERVISED LEARNING
In this section, we compare Wav2Vec2 models with varying amounts of pretraining data: 60k hours, 960h, or none at all. We use each model both as a proxy to generate adversarial noise and as a private model for evaluation with all other proxies.
As Wav2Vec2 models fine-tuned from scratch are not publicly available, we train our own models with no pretraining, using the Wav2Vec2 fine-tuning configurations on 960h of labeled data available in Fairseq (Ott et al., 2019). These configurations are likely suboptimal and our models achieve test-clean WERs of 9.1% (Large) and 11.3% (Base), much higher than the pretrained+fine-tuned Wav2Vec2 models. This performance discrepancy could affect the fairness of our comparison. We therefore add to our experiments Wav2Vec2 Base models fine-tuned on 1h and 10h of labeled data only. These models achieve test-clean WERs of 24.5% and 11.1%. Therefore we can observe the influence of SSL pretraining by taking model architecture and performance out of the equation.
Our attacks are not as strong as in section 3.1, and only have limited effect on the targeted WER. Therefore we evaluate results at the character level, which offers much finer granularity. For reference, we observe that the CER between two random pairs of sentences in LibriSpeech is 80-85% on average. Therefore attack success rates higher than 20% (i.e. CER < 80% with the target) indicate a partially successful attack. We report those results in Table 2. Results in italic correspond to cases where the attacked model is the proxy or was fine-tuned from the same pretrained representation, and therefore do not correspond to a transferred attack.
These results show unambiguously that SSL pretraining plays a huge role in the transferability of adversarial attacks. Adversarial examples generated on the pretrained Wav2Vec2 models fine-tuned on 960h are partially successful on all pretrained models (success rate in the 25-46% range). They are however ineffective on the ASR models trained from scratch (4-8%). Similarly, models trained from scratch are bad proxies for pretrained models (2-3%) and even for each other (19-22%).
It follows that SSL pretraining is a necessary condition for transferable adversarial examples in both the proxy and the private model. We confirm it by plotting in Figure 3a the evolution of the target loss while generating one adversarial example. We display the loss for the proxy model (blue) and two private models. The loss of the pretrained private model (red) converges to a much lower value than the non-pretrained model (yellow).
SSL pretraining is however not a sufficient condition for attack transferability, and other factors play a role as well. For instance, the Base models fine-tuned on just 10h and 1h are ineffective proxies; strong ASR models are likely better proxies than weaker ones.
4.2 INFLUENCE OF PRETRAINING DATA
As observed in Section 3, models that were (pre)trained on different data than the proxies can still be affected by transferred attacks. We analyse this effect in more detail in this section. We focus on five Wav2Vec2-Large models. One is pretrained and fine-tuned on LibriSpeech. One is pre-
trained on LibriLight and fine-tuned on LibriSpeech. Two are pretrained on LV60k, CommonVoice, SwitchBoard and Fisher, and fine-tuned respectively on LibriSpeech and SwitchBoard. Finally one is pretrained and finetuned on CommonVoice (English-only). As in the previous section, we evaluate every combination of proxy and target models.
We report the results in Table 3. We observe that most pairs of proxy and private models lead to important partial transferability. The major exception is the CommonVoice-only model, which does not succeed as a proxy for other models (0-8% success rate). In contrast, it is vulnerable to attacks transferred from other models, including those that do not have CommonVoice in their training data. We also note that models pretrained on Libri-Light or more (60k+ hours) are better proxies, and more vulnerable to attacks, than the LibriSpeech-only and CommonVoice-only models. In other words the vulnerability that we point out is worsened rather than mitigated by increasing amounts of available data.
4.3 MODEL SIZE AND TRAINING HYPERPARAMETERS
We now extend our ablation study to models pretrained with different SSL paradigms. We report the results in Table 4. We observe that adversarial examples also transfer between models trained with different paradigms. Moreover, at equal pretraining data, not all models are equal proxies, and the HuBERT Large model (pretrained on 60kh) is the best proxy by a large margin.
5 A HYPOTHESIS FOR THE VULNERABILITY OF SSL-PRETRAINED MODELS
We have established a link between adversarial transferability and the SSL pretraining of ASR models. In this section we propose a hypothesis explaining that link. We first show in Section 5.1, with empirical justification, that attacks with a very precise target are much harder to transfer, everything
else being equal, explaining why targeted ASR attacks are usually nontransferable. Then in Section 5.2 we suggest ways in which SSL alleviates these difficulties, thus recovering some transferability.
5.1 AT EQUAL WHITE-BOX SUCCESS, VERY TARGETED ATTACKS ARE HARDER TO TRANSFER
Targeted attacks on CIFAR10 force the model to predict one out of 10 different labels. Targeted attacks on ASR models force the model to transcribe one of all the possible transcriptions: with sequences of just five English words the number of possibilities is equal to 170,000^5 ≈ 10^26. We can call such an attack ”very targeted”, by contrast to more ”mildly targeted” attacks on CIFAR10.
We hypothesize that the target precision, or ”how targeted” the attack is, negatively affects its transferability success rate, explaining why targeted ASR attacks do not transfer easily. To demonstrate it empirically, we can imagine an experiment where an attacker tries to run a very targeted attack on CIFAR10. We hypothesize that in such a case, the transferred attack success rate would drop even if the white box attack success rate remains high. Inversely, if we designed a ”mildly targeted” attack on ASR models, we would expect it to achieve a non-trivial transferability success rate. We designed experiments for both cases, which we summarize below. Complete experimental details and results are provided in Appendix B.
5.1.1 VERY TARGETED ATTACKS ON CIFAR10
We run an attack on a ResNet CIFAR10 model. We do not just enforce the model’s most probable output (top1 prediction) but the first k most probable outputs (topk prediction). For example with k = 3, given an image of an airplane, the attack objective could be to modify the image such that the most probable model output is ”car”, the second most probable is ”bird” and the third is ”frog”. Our attack algorithm sets a ”target distribution” of classes, then minimizes the KL divergence of the model’s probabilistic outputs and the target, using Projected Gradient Descent. The success rate is evaluated by matching the top k predictions and the top k targets.
We compute the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k. For k = 1, we measure a transferability success rate above 30%. However, as k increases, the transferability success rate drops close to 10%, which is the success threshold that a random model would achieve. In other words, the transferability becomes null as k increases. Meanwhile, the white box attack success rate remains above 95%. Therefore very targeted attacks on images do not transfer.
5.1.2 MILDLY TARGETED ATTACKS ON ASR
We train five small Conformer models on LibriSpeech. On each of them we generate targeted adversarial examples. The target objective is simply to prepend the word ”But” to the original transcription. This makes for a much less targeted attack than is traditionally done with ASR. The attack success rate is evaluated simply by checking the presence of the word ”But” at the beginning of the prediction. We restrict evaluation to inputs whose transcription does not start with that word.
For each model, we generate 100 adversarial examples and evaluate them on all 4 other models. We thus obtain 20 different transferability success rates. The average of these scores is 18% with a standard deviation of 4.7%. Therefore mildly targeted attacks on ASR transfer substantially better than regular, very targeted attacks. Equivalent experiments with very targeted ASR attacks are reported in Abdullah et al. (2021b): the word-level transferability success rate is 0%.
5.2 VERY TARGETED TRANSFERABILITY REQUIRES IMPORTANT FEATURE OVERLAP
Why would very targeted attacks transfer less? As Ilyas et al. (2019) show, statistically meaningful patterns in the training data may be ”robust” (i.e. resilient to small perturbations) or non-robust. By leveraging non-robust features attackers can generate adversarial perturbations - and as these features can be learned by any model, these perturbations will transfer. The underlying assumption behind this framework is that all models learn the same features. In practice, two separate models do not learn identical features due to randomness in training. But if they are ”close enough”, i.e. if the feature overlap between both models is important, then transferability will be observed.
It therefore makes perfect sense that more targeted attacks would transfer less. The more precise and difficult the attack objective is, the more features the attacker will depend on to achieve it. This increases the amount of feature overlap needed between proxy and private model for the attack to transfer. In the case of targeted ASR attacks, the required overlap is considerable. We hypothesize that SSL pretraining increases the feature overlap between ASR models. As empirically verifying it would pose important difficulties, we propose a high-level justification of that hypothesis.
ASR training aims at learning a representation that enables speech transcription. A subset of all features is sufficient to achieve this objective: for instance, there are lots of redundancies between low-frequency and high-frequency features, and a human listener can easily transcribe speech where most frequencies have been filtered out. The set of features learned by ASR models is therefore underspecified: two models, even trained very similarly, may learn representations with little overlap.
Self-Supervised Learning on the other hand does not only learn useful features for transcription but features needed for predicting the input itself: parts of the input are masked, then they (or their quantized or clusterized form) are predicted using context. Arguably this much more ambitious objective requires the network to learn as many features as possible. In fact, the goal of such pretraining is to learn useful representations not just for ASR but any downstream task - i.e. ”exhaustive” representations. Intuitively, different models trained in that way would share many more features than ASR models trained from scratch - leading to more transferable adversarial examples.
6 RELATED WORK
The transferability of adversarial attacks has been known for many years in Image Classification (Papernot et al., 2016). On ASR it has been limited to simple attack objectives, like preventing WakeWord detection in Alexa (Li et al., 2019) or signal processing-based attacks (Abdullah et al., 2021a; 2022b). When it comes to optimization-based attacks on large ASR models, transferability claims are usually limited and focus on untargeted attacks (Wu et al., 2022). In very specific cases there have been limited claims of targeted, transferable attacks, such as Yuan et al. (2018); however, this work does not focus on imperceptible attacks with small amounts of noise, but rather attacks embedded in music. When it comes to standard targeted optimization attacks, Abdullah et al. (2021b) have shown that they display no transferability on DeepSpeech2 models, even when the proxy and the attacked model are trained with identical hyperparameters apart from the initial random seed.
Past ASR adversarial attacks usually focus on a handful of neural architectures, typically DeepSpeech2 (et al., 2016), sometimes Listen Attend and Spell (Chan et al., 2016). Only recently have attacks been extended to multiple recent architectures for a fair comparison between models (Lu et al., 2021; Olivier & Raj, 2022; Wu et al., 2022). Most related to this work is Wu et al. (2022), which focuses on the vulnerability of SSL speech models. They however focus on attacking the base pretrained model with untargeted noise that remains effective on downstream tasks. We study targeted attacks, with a much deeper focus on transferability between different models. Olivier & Raj (2022) have hinted that Wav2Vec2 models are vulnerable to transferred attacks, but only report limited results on two models and do not investigate the cause of that phenomenon. We attribute it to SSL pretraining and back our claims empirically.
Abdullah et al. (2022a) have identified factors that hinder transferability for ASR attacks, such as MFCC features, Recurrent Neural Networks, and large output sizes. Since Wav2Vec2 is a CNN-Transformer model with character outputs, it has a better prior than DeepSpeech2 for achieving transferable adversarial attacks. However, according to that paper, this should be far from sufficient to obtain transferable attacks: our results differ in the case of SSL-pretrained models.
7 CONCLUSION
We have shown that ASR targeted attacks are transferable between SSL-pretrained ASR models. Direct access to their weights is no longer required to fool models to predict outputs of the attacker’s choice - and to an extent, knowledge of its training data is not required either. With that in mind, and given the existence of over-the-air attack algorithms, we expect attacks against ASR models to become a practical, realistic threat as soon as Wav2Vec2-type models are deployed in production.
In that context, it is paramount to develop adversarial defense mechanisms for ASR models. Fortunately, such defenses already exist, but they come at the cost of a tradeoff in model performance. We illustrate it in appendix E. Further research should be carried out into mitigating that tradeoff and adapting to ASR the most effective defenses in image classification, such as adversarial training.
A EXPERIMENTAL DETAILS FOR LIBRISPEECH EXPERIMENTS
A.1 FRAMEWORKS
We compute adversarial examples using the robust speech framework (Olivier & Raj, 2022). This library uses Speechbrain (Ravanelli et al., 2021) to load and train ASR models and offers implementations of various adversarial attack algorithms. Models and attacks are implemented using PyTorch (Paszke et al., 2019).
We use robust speech for evaluation on SpeechBrain-supported models. In section 3 we export a HuggingFace Dataset (Lhoest et al., 2021), then evaluate models via the HuggingFace Transformers (et al., 2020) library. Finally, we use Fairseq (Ott et al., 2019) for training models from scratch.
All of our robust speech and Fairseq configurations are released alongside this article.
A.2 ATTACK HYPERPARAMETERS
We exploit the Carlini&Wagner attack (see section 2.2) implemented in robust speech, with the following hyperparameters:
• initial ϵ: 0.015 (and 0.04 in appendix D.1)
• learning rate: 0005
• number of decreasing ϵ values: 1
• Regularization constant c: 10
• optimizer: SGD
• attack iterations: 10000 in section 3.1, 1000 in section 4
A.3 DATASET AND TARGETS
Our adversarial dataset in section 3.1 consists of 85 sentences from the LibriSpeech test-clean set. To extract these sentences we take the first 200 sentences in the manifest, then keep only those shorter than 7 seconds. In section 4, we take the first 100 sentences and filter those shorter than 14 seconds.
As attack targets, we use actual LibriSpeech sentences sampled from the test-other set. Our candidate targets are:
• Let me see how can i begin
• Now go I can’t keep my eyes open
• So you are not a grave digger then
• He had hardly the strength to stammer
• What can this mean she said to herself
• Not years for she’s only five and twenty
• What does not a man undergo for the sake of a cure
• It is easy enough with the child you will carry her out
• Poor little man said the lady you miss your mother don’t you
• At last the little lieutenant could bear the anxiety no longer
• Take the meat of one large crab scraping out all of the fat from the shell
• Tis a strange change and I am very sorry for it but I’ll swear I know not how to help it
• The bourgeois did not care much about being buried in the Vaugirard it hinted at poverty pere Lachaise if you please
For each sentence we attack, we assign the candidate target with the closest length to the sentence’s original target.
A.4 MODELS
A.4.1 TRAINING WAV2VEC2 MODELS FROM SCRATCH
We use Fairseq to train Base and Large Wav2Vec2 models from scratch. Unfortunately, no configuration or pretrained weights have been released for that purpose, and we resort to using Wav2Vec2 fine-tuning configurations while simply skipping the pretraining step. Despite our attempts to tune training hyperparameters, we do not match the expected performance of a Wav2Vec2 model trained from scratch: (Baevski et al., 2020) report a WER of 3.0% for a large model, while we only get 9.1%.
A.4.2 GENERATING ADVERSARIAL EXAMPLES
Wav2Vec2, HuBERT and Data2Vec models are all supported directly in robust speech and are therefore those we use for generating adversarial examples. We use the HuggingFace backend of Speechbrain for most pretrained models, and its Fairseq backend for a few (Wav2Vec2-Base models finetuned on 10h and 1h, and models trained from scratch). In both cases, the model’s original tokenizer cannot be loaded in SpeechBrain directly. Therefore, we fine-tune the final projection layer of each model on 1h of LibriSpeech train-clean data.
The Wav2Vec2 model pretrained and fine-tuned on CommonVoice is a SpeechBrain original model. Similarly, we fine-tune it on 1h of LibriSpeech data as a shift from the CommonVoice output space to the LibriSpeech one. As a result, all our models share the same character output space.
A.4.3 EVALUATING PRETRAINED MODELS
In section 3, we directly evaluate models from HuggingFace Transformers and SpeechBrain on our adversarial dataset, without modification.
B EXPERIMENTAL DETAILS AND RESULTS FOR SMALL-SCALE EXPERIMENTS
This section describes the experimental details used in section 5.
B.1 CIFAR10 EXPERIMENTS
We use a pretrained ResNet18 as proxy, and a pretrained ResNet50 as private model.
Our ”very targeted attack” PGDk consists in applying the following steps for each input:
• target selection. We sample uniformly an ordered subset of k classes out of 10 (e.g. with k = 3: (2, 5, 6)). We also sample a point uniformly on the unit k-simplex \{x_1, ..., x_k \in [0, 1] : \sum_i x_i = 1\}, by sampling from an exponential distribution and normalizing (Onn & Weissman, 2011) (e.g. (0.17, 0.55, 0.28)). We combine the two to obtain a 10-dimensional vector with zero probability on all but the selected k classes (y = (0, 0.17, 0, 0, 0.55, 0.28, 0, 0, 0, 0)). This is our target.
• During the attack, we use Projected Gradient Descent (Madry et al., 2018) to minimize the KL divergence KL(f(x), y) between the softmax output and the target, within L2 radius ϵ = 0.5. We use learning rate 0.1 for k ∗ 1000 attack steps.
• We measure attack success rate by measuring the top-k match between f(x) and y:
acc = \frac{1}{k} \sum_{i=1}^{k} \mathbb{1}\left[\operatorname{argsort}(f(x))_i = \operatorname{argsort}(y)_i\right]
with argsort(y) returning the indices of the sorted elements of y in decreasing order. For instance f(x) = (0.1, 0.05, 0.05, 0.05, 0.35, 0.2, 0.05, 0.05, 0.05, 0.05) would get an accuracy of 0.666, as the top 2 classes match with y but not the third.
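A small sketch of the target construction and the top-k success metric described above (the PGD optimisation of the KL divergence itself is omitted; the names are ours):

import numpy as np

def make_topk_target(k, n_classes=10, rng=np.random.default_rng()):
    # Sample k distinct classes and a point on the k-simplex (normalised exponentials),
    # then place those probabilities on the selected classes; all other classes get zero.
    classes = rng.choice(n_classes, size=k, replace=False)
    probs = rng.exponential(size=k)
    y = np.zeros(n_classes)
    y[classes] = probs / probs.sum()
    return y

def topk_match(f_x, y, k):
    # Fraction of positions at which the top-k ranking of the prediction matches the target.
    return np.mean(np.argsort(-f_x)[:k] == np.argsort(-y)[:k])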
We evaluate attacks on 256 random images from the CIFAR10 dataset. For each value of k between 1 and 10 we repeat the experiment 3 times and average the attack success rates. In figure 2 we plot
the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k.
B.2 MILDLY TARGETED ASR ATTACKS
We train 5 identical conformer encoder models with 8 encoder layers, 4 attention heads, and hidden dimension 144. We train them with CTC loss for 30 epochs on the LibriSpeech train-clean-100 set, with different random seeds.
We run an L2-PGD attack with SNR bound 30dB, in which we minimize the cross-entropy loss between the utterance and its transcription prepended with the word ”But”. The utterances we attack are the first 100 sentences in the LibriSpeech test-clean set, from which we remove 7 sentences already starting with the word ”But”. We generate adversarial examples using each of the 5 models as proxy, and evaluate these examples on all 5 models. We report the full results in Table 5.
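The success check and the aggregation over proxy/private pairs reduce to a few lines (a sketch with illustrative names):

import numpy as np

def is_success(prediction):
    # Word-level check for this mildly targeted attack.
    return prediction.strip().upper().startswith("BUT ")

def transfer_stats(success):
    # success[i, j, n] = 1 if example n generated on proxy i fools model j.
    rates = success.mean(axis=2)                            # 5 x 5 matrix of success rates
    off_diag = rates[~np.eye(rates.shape[0], dtype=bool)]   # the 20 transferred (proxy != target) rates
    return off_diag.mean(), off_diag.std()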
C FULL RESULTS TABLE FOR CROSS-MODEL ATTACKS
Table 6 completes the ablation study in Section 4 by evaluating all pairwise Proxy-Model combinations in our pool of Wav2Vec2-type models.
D INFLUENCE OF HYPERPARAMETERS ON ATTACK RESULTS
D.1 ATTACK RADIUS
In Table 7 we extend the results of Table 1 by comparing attack results for two different attack radii. These radii are ϵ = 0.015 and ϵ = 0.04, corresponding to Signal-Noise Ratios of 30dB and 22dB respectively. The former is identical to the setting of Table 1; the latter is substantially larger, and corresponds to a more easily perceptible noise.
Looking at the white-box attack results on the proxy models the difference is drastic: with larger noise the targeted success rate jumps from 88% to 98%. The transferred attack results on SSL-pretrained models also increase overall, with success increases ranging from 0% (Wav2Vec2-Large) to 20% (Data2Vec-Large) with a median increase of 10%. Crucially however, the targeted success does not increase at all and even decreases for ASR models trained from scratch. This confirms that there is a structural difference between the robustness of ASR models with and without SSL, that cannot be bridged simply by increasing the attack strength.
D.2 LANGUAGE MODELS
In section 3 we report the results of our adversarial dataset on multiple Wav2Vec2-type models, enhanced with an N-gram language model whenever available. In Table 8 we evaluate the influence of that language model on attack results.
We observe that the attack success rate systematically increases by 8 to 17% when adding a language model to the ASR model. This is understandable considering that our targets are sound English sentences: if a model tends to transcribe that target with mistakes, the language model can bridge that
gap. To put it differently, the more prone an ASR model is to output sentences in a given distribution, the more vulnerable it is to attacks with targets sampled from that distribution. Language models are therefore more of a liability than a defense against attacks, and most likely so would be many tricks applied to an ASR model in order to improve its general performance.
D.3 EFFECT OF MODEL REGULARIZATION ON TRANSFERABILITY
As mentioned in Section 2.2 we use regularization tricks like dropout in all proxy models when optimizing the adversarial perturbation. In Figure 3b we plot the loss on proxy and private models without that regularization, for comparison with Figure 3a. We observe that the loss degrades significantly on private models without regularization.
On the other hand, the loss on the proxy converges much faster in Figure 3b: removing model regularization makes for better, faster white-box attacks, at the cost of all transferability. To the best of our knowledge, past works like Carlini & Wagner (2018) have not used regularization for generation, explaining why they report better white-box attacks than we do in terms of WER and SNR. However, as we have established above, applying regularization when attacking standard ASR models does not lead to transferable adversarial examples: for that SSL pretraining is also required.
E DEFENDING AGAINST ADVERSARIAL EXAMPLES
Although we have shown that adversarial attacks can represent an important threat for private, SSL-based ASR models, it is possible to defend against them. Randomized smoothing Cohen et al. (2019) is a popular adversarial defense that has been applied to ASR in the past Olivier & Raj (2021) and comes with some robustness guarantees. It consists in applying to the inputs, before feeding them to the model, amounts of random gaussian noise that are significantly larger than potential adversarial perturbations in L2 norm. For reference we try applying it on some of our models.
We follow (Olivier & Raj, 2021) and enhance randomized smoothing with a-priori SNR estimation and ROVER voting (with 8 outputs) to boost performance. We use gaussian deviation σ = 0.02. For evaluation, we simply check the effect of our adversarial examples generated in section 3.1 on the smoothed model. A rigorous evaluation would require us to design adaptive attacks Athalye et al. (2018); Tramer et al. (2020); since this paper does not focus on claiming robustness to attacks, we restrict ourselves to a simpler setting.
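An illustrative sketch of the smoothed inference (the model.transcribe interface and the combine step standing in for ROVER voting are placeholders; in practice we rely on the implementation of Olivier & Raj (2021)):

import torch

def smoothed_transcription(model, x, combine, sigma=0.02, n_votes=8):
    # Transcribe several Gaussian-noised copies of the input and combine the hypotheses
    # (e.g. with ROVER voting); sigma trades off robustness against clean performance.
    hyps = []
    for _ in range(n_votes):
        x_noisy = x + sigma * torch.randn_like(x)
        hyps.append(model.transcribe(x_noisy))
    return combine(hyps)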
We report our results in Table 9 for the Wav2Vec2-Base, Wav2Vec2-Large and Data2Vec-Large models, pretrained and fine-tuned on 960h of LibriSpeech training data. We observe that randomized smoothing is sufficient to block the targeted attack completely (0% success rate) and recover most of the original transcription (the untargeted success rate drops to 14-34% depending on the model). However, due to the addition of gaussian noise on all inputs the defense takes a toll on the performance on clean data: the WER jumps by 4-10%. The standard deviation σ controls this tradeoff between robustness and performance; we chose the value of σ that minimizes the untargeted success rate.
Unsurprisingly, randomized smoothing is a promising protection against transferred attacks, but it does leave room for improvement. These results illustrate the need for additional research on adversarial defenses. | 1. What is the focus of the paper regarding transferability property and ASR models?
2. What are the strengths and weaknesses of the proposed approach, particularly in generating adversarial examples?
3. Do you have any concerns or questions regarding the experimental setup and results, such as the selection of samples and the use of CER instead of WER?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates the transferability property of adversarial attacks on ASR models. They evaluate the robustness of modern transformer-based ASR architectures such as Wav2Vec-2.0 etc. With a series of experiments on a set of ASR models using a set of 85 adversarial samples, they show that many state-of-the-art ASR models are in fact vulnerable to the transferability property. Additionally, they also claim to show that SSL-pretraining is the reason for this vulnerability to transferability.
Strengths And Weaknesses
The approach for generating adversarial examples is interesting, but it is not explained very well in Section 3.1, as details are missing. For example: Were only 85 samples used when fine-tuning the models? Why 85, and how were these selected? It is not clear how the third model (Data2Vec BASE) was used as a stopping criterion. Results are presented in Section 3.4, but before that I have not seen how the adversarial examples are generated.
In Section 4, why was the character error rate (CER) used instead of WER? CER might not indicate a significant impact on WER, as these errors could have been fixed by an LM during decoding (if one were added). I guess CER was not used in the previous section for the same reason, so it would be good to see these results in terms of WER. This section in particular was difficult to understand, and more details should be added for clarity. For example, the following sentence is not at all clear: "we observe that the Character Error Rate between two random sentences is about 80-85% on average. Therefore attack success rates higher than 20% indicate a partially successful attack". How is CER computed between two samples? Is it not between the reference and the hypothesis? How is 20% selected as the indicator of a successful attack?
Clarity, Quality, Novelty And Reproducibility
The study is very interesting but not novel. Many experiments are not clearly explained, and reproducing them may not be easy without all the details.
ICLR | Title
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models
Abstract
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
N/A
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
1 INTRODUCTION
Adversarial audio algorithms are designed to force Automatic Speech Recognition (ASR) models to produce incorrect outputs. They do so by introducing small amounts of imperceptible, carefully crafted noise to benign audio samples that can force the ASR model to produce incorrect transcripts. Specifically, targeted adversarial attacks (Carlini & Wagner, 2018; Qin et al., 2019) are designed to force ASR models to output any target sentence of the attacker’s choice. However, these attacks have limited effectiveness as they make unreasonable assumptions (e.g., white-box access to the model weights), which are unlikely to be satisfied in real-world settings.
An attacker could hypothetically bypass this limitation by using the transferability property of adversarial samples: they generate adversarial samples for a white-box proxy model, then pass these to a different remote black-box model, as we illustrate in Figure 1a. Transferability has been successfully demonstrated in other machine learning domains, like computer vision (Papernot et al., 2016). Yet for ASR, recent work has shown that transferability is close to non-existent between large models (Abdullah et al., 2021b), even between identically trained models (i.e., same training hyper-parameters, even including the random initialization seed). These findings were demonstrated on older ASR architectures, specifically on LSTM-based DeepSpeech2 models trained with CTC loss. However, robustness properties sometimes vary considerably between different ASR architectures (Lu et al., 2021; Olivier & Raj, 2022), and it is worth studying adversarial transferability on more recent families of models.
In this work, we evaluate the robustness of modern transformer-based ASR architectures. We show that many state-of-the-art ASR models are in fact vulnerable to the transferability property. Specifically, our core finding can be formulated as follows:
Pretraining transformer-based ASR models with Self-Supervised Learning (SSL) makes them vulnerable to transferable adversarial attacks.
SSL is an increasingly popular learning paradigm in ASR (Figure 1b), used to boost model performance by leveraging large amounts of unlabeled data. We demonstrate that it also hinders robustness by making the following contributions:
• First, we show that most public SSL-pretrained ASR models are vulnerable to transferability. We generate 85 adversarial samples for the proxy HuBERT and Wav2Vec2 models (Section 3). We show that these samples are effective against a wide panel of public transformer-based ASRs. This includes ASRs trained on different data than our proxies.
• Second, we show that SSL-pretraining is the reason for this vulnerability to transferability. We do so using an ablation study on Wav2Vec2-type models.
• Third, we propose an explanation for this curious phenomenon. We argue that targeted ASR attacks need considerable feature overlap to be transferable; and that SSL objectives encourage such feature overlap between different models.
Our results show that SSL, a line of work gathering attention in the ASR community that has pushed the state-of-the-art on many benchmarks, is also a source of vulnerability. Formerly innocuous attacks with unreasonable assumptions are now effective against many modern models. As it is likely that SSL will be used to train ASR systems in production, our results pave the way for practical, targeted attacks in the real world. By no means do these results imply that this line of work should be aborted, but they emphasize the pressing need to focus on robustness alongside performance.
2 BACKGROUND
2.1 SSL PRETRAINING FOR ASR MODELS
We describe in this section the principles of the SSL-pretrained ASR models whose robustness to attacks we evaluate in this work. These models usually follow the neural architecture of Wav2Vec2 (Baevski et al., 2020). Raw audio inputs are fed directly to a CNN. A Transformer encodes the CNN outputs into contextualized representations. A final feed-forward network projects these representations into a character output space. The model is fine-tuned with CTC loss (Graves et al., 2006).
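As a point of reference, the sketch below shows one common way to run such a CTC fine-tuned checkpoint through the HuggingFace Transformers library; the checkpoint name and the 16 kHz input are assumptions for illustration, not a description of the exact pipeline used in this paper.

import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load a public SSL-pretrained, CTC fine-tuned checkpoint (name assumed for illustration)
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").eval()

def transcribe(waveform):
    # waveform: 1-D array of floats sampled at 16 kHz
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]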
A number of different models follow this architecture, including Wav2Vec2, HuBERT (Hsu et al., 2021), Data2Vec (Baevski et al., 2022), UniSpeech-SAT (Wang et al., 2021; Chen et al., 2021b) or WavLM (Chen et al., 2021a). These networks only have very minor differences in their architectures, to the point that standardized sizes are used for all of them. Base models have 12 transformer hidden layers and 90M parameters. Large models have 24 layers and 300M parameters. Finally, XLarge models have 48 layers for a total of 1B parameters.
While the networks are similar, the training pipelines of these models differ substantially. All models are pretrained on large amounts of unlabeled data, then fine-tuned for ASR on varying quantities of labeled data. The pretraining involves SSL objectives, such as Quantization and Contrastive Learning (Wav2Vec2), offline clustering and masked predictions (HuBERT), or masked prediction
of contextualized labels (Data2Vec). Unispeech combines SSL and CTC pretraining with multitask learning. WavLM adds denoising objectives and scales to even greater amounts of unlabeled data.
SSL pretraining is helpful in many regards: it makes the same network easy to fine-tune for multiple downstream tasks with little labeled data and has improved state-of-the-art results in ASR benchmarks, especially in low-resource settings. As we demonstrate, it is also a source of vulnerabilities.
2.2 ADVERSARIAL ATTACKS
Adversarial examples are inputs modified imperceptibly by an attacker to fool machine learning models (Szegedy et al., 2014; Goodfellow et al., 2014; Carlini & Wagner, 2016; Madry et al., 2018). While most works have focused on image classification, several have created or adapted attacks for other tasks such as ASR (Cisse et al., 2017; Carlini & Wagner, 2018; Qin et al., 2019).
The attack we use is based on the Carlini&Wagner ASR attack (Carlini & Wagner, 2018), although slightly simplified. Given an input x, a target transcription yt, and an ASR model f trained with loss L, our attack finds an additive perturbation δ optimizing the following objective:
min_δ L(f(x + δ), y_t) + c ∥δ∥₂²   s.t.   ∥δ∥∞ < ϵ   (1)
which we optimize using L∞ Projected Gradient Descent. While the CW attack typically uses a large initial ϵ, then gradually reduces it as it finds successful perturbations, we fix a single value of ϵ and optimize for a fixed number of iterations. We find that this scheme, closer to the PGD algorithm of Madry et al. (2018), greatly improves attack transferability. However, we keep using the L2 regularization term c ∥δ∥₂² introduced in the CW attack. We also find that applying regularization such as dropout during attack optimization greatly helps to generate transferable perturbations. This effect is analyzed in more detail in Appendix D.3. Throughout the rest of the paper, we run all attack optimization steps using the default dropout, layer drop, etc. that the proxy model used during training (typically a dropout of 0.1).
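A minimal sketch of this optimization loop is given below, assuming each proxy exposes a differentiable loss_fn(model, audio, target_ids) helper; that helper, as well as the learning rate, are assumptions for illustration (the actual implementation relies on the robust_speech framework described in Appendix A). The perturbation bound and the regularization constant mirror the hyperparameters listed in Appendix A.2.

import torch

def cw_pgd_attack(models, x, target_ids, loss_fn, eps=0.015, lr=1e-3, steps=1000, c=10.0):
    # keep dropout / layer drop active, as during training of the proxy models
    for m in models:
        m.train()
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        adv = x + delta
        # sum of the proxies' losses on the target, plus the CW-style L2 penalty
        loss = sum(loss_fn(m, adv, target_ids) for m in models) + c * delta.pow(2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # L-infinity projection
    return (x + delta).detach()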
3 TRANSFERABLE ATTACK ON STATE-OF-THE-ART ASR MODELS
In our core experiment, we fool multiple state-of-the-art SSL-pretrained ASR models with targeted, transferred adversarial attacks. We generate a small set of targeted audio adversarial examples using fixed proxy models. We then transfer those same examples to a large number of models available in the HuggingFace Transformers library. Table 1 specifies how much unlabeled and labeled data these models were trained on. We provide the full experimental details in Appendix A.
3.1 GENERATING ADVERSARIAL EXAMPLES ON PROXIES
We describe our procedure to generate adversarial examples. To maximize the transferability success rate of our perturbations we improve the base attack in Section 2.2 in several key ways:
• To limit attack overfitting on our proxy, we combine the losses of two proxy models: Wav2Vec2 and HuBERT (LARGE). Both models were pretrained on the entire LV60k dataset and finetuned on 960h of LibriSpeech. As these models have respectively a contrastive and predictive objective, they are a representative sample of SSL-pretrained ASR models. The sum of their losses is used as the optimization objective in Equation 1.
• We use 10000 optimization steps, which is considerable (for comparison Carlini & Wagner (2018) use 4000) and can also lead to the adversarial noise overfitting the proxy models. To mitigate this effect we use a third model, the Data2Vec BASE network trained on LibriSpeech, as a stopping criterion for the attack. At each attack iteration, we feed our adversarial example to Data2Vec, and keep track of the best-performing perturbation (in terms of WER). We return that best perturbation at the end of the attack. Because this procedure is computationally expensive, we only apply it to a subset A of 85 utterances of less than 7 seconds. We sample them randomly in the LibriSpeech test-clean set. We select attack targets at random: we sample a completely disjoint subset B of utterances in the LibriSpeech test-other set. To each utterance in A we assign as target the transcription of the sentence in B whose length is closest to its own (see the sketch after this list). This ensures that a very long target isn't assigned to a very short utterance, or vice versa.
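A sketch of this length-matched target assignment follows; utterances_a (mapping utterance ids to reference transcriptions) and candidate_targets (the list of test-other transcriptions) are names assumed for illustration.

def assign_targets(utterances_a, candidate_targets):
    # for each utterance, pick the candidate target whose length is closest
    # to the length of the utterance's own transcription
    return {
        utt_id: min(candidate_targets, key=lambda t: abs(len(t) - len(ref)))
        for utt_id, ref in utterances_a.items()
    }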
3.2 TRANSFERRING ADVERSARIAL EXAMPLES ON ASR MODELS
We evaluate all SSL-pretrained models mentioned in Section 2.1, along with several others for comparison: the massively multilingual speech recognizer or M-CTC (Lugosch et al., 2022) trained with pseudo-labeling, and models trained from scratch for ASR: the Speech-to-text model from Fairseq (Wang et al., 2020) and the CRDNN and Transformer from SpeechBrain (Ravanelli et al., 2021).
3.3 METRICS
We evaluate the performance of ASR models with the Word-Error-Rate (WER) between the model outputs and the reference transcriptions.
When evaluating the success of adversarial examples, we can also use the Word-Error-Rate. A low WER between the prediction and the attack target y_t indicates a successful attack. We therefore define the word-level targeted attack success rate as
TASR = max(1−WER(f(x+ δ), yt), 0) (2)
It is also interesting to look at the results of the attack in terms of denial-of-service, i.e. the attack’s ability to stop the model from predicting the correct transcription y. Here a high WER indicates a successful attack. We define the word-level untargeted attack success rate as
UASR = min(WER(f(x+ δ), y), 1) (3)
We can also compute the attack success rate at the character level, i.e. using the Character-Error-Rate (CER) instead of the Word-Error-Rate. Character-level metrics are interesting when using weaker attacks that affect the model, but not enough to reduce the targeted WER significantly. We use them in our ablation study in Section 4.
Finally, we control the amount of noise in our adversarial examples with the Signal-Noise Ratio (SNR), defined as
SNR(δ, x) = 10 log( ∥x∥₂² / ∥δ∥₂² )   (4)
for an input x and a perturbation δ. When generating adversarial examples, we adjust the L∞ bound ϵ (Equation 1) to achieve a target SNR.
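A small sketch of these three metrics is given below, assuming the jiwer package for word error rates; this is a convenient stand-in, not necessarily the tooling used by the authors.

import torch
from jiwer import wer

def targeted_success(prediction, target):
    return max(1.0 - wer(target, prediction), 0.0)        # Eq. (2)

def untargeted_success(prediction, reference):
    return min(wer(reference, prediction), 1.0)            # Eq. (3)

def snr_db(x, delta):
    # x, delta: 1-D float tensors of equal length
    return 10.0 * torch.log10(x.pow(2).sum() / delta.pow(2).sum())  # Eq. (4)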
3.4 RESULTS
We report the results of our adversarial examples in Table 1 for ϵ = 0.015, corresponding to a Signal-Noise Ratio of 30dB on average. In Appendix D.1 we also report results for a larger ϵ value.
On 12 out of 16 models, we observe that the attack achieves total denial-of-service: the untargeted success rate is 100%. Moreover, on the first 6 models (proxies aside), the targeted attack success rate ranges between 50% and 81%: the target is more than half correctly predicted! These results are in flagrant contradiction with past works on DeepSpeech2-like models, where even the slightest change in training leads to a total absence of targeted transferability between proxy and private model. Our private models vary from the proxies in depth, number of parameters and even training methods, yet we observe important transferability. However, these 6 models have all been pretrained on LibriSpeech or Libri-Light with SSL pretraining, i.e. the same data distribution as our proxies.
The following five models were pretrained on different datasets. One was pretrained on a combination of Libri-Light, VoxPopuli and GigaSpeech; two on Libri-Light, CommonVoice, SwitchBoard and Fisher; and two on CommonVoice either multilingual or English-only. The transferability success rate on these five models ranges from 18% to 67%, which is significant. Even the CommonVoice models, whose training data has no intersection with Libri-Light, are partially affected.
Although our inputs and attack targets are in English, we apply them to a French-only CommonVoice Wav2vec2. This model, incapable of decoding clean LibriSpeech data, is also unaffected by our targeted perturbation. It therefore seems that, while multilingual models are not robust to our examples, a minimal performance on the original language is required to observe transferability.
The final 4 models, for which the targeted transferability rate is null or close to null, are those that were not SSL-pretrained at all (including M-CTC, which was pretrained with pseudo-labeling). These four models also partially resist the untargeted attack.
It emerges from these results that some recent ASR models, specifically those pretrained with SSL, can be vulnerable to transferred attacks. These results diverge significantly from previous works like Abdullah et al. (2021b; 2022a), which showed no transferability between different models. Table 1 hints that SSL pretraining plays an important role in transferability, but does not prove it: to do so we would need to compare models of identical architecture and performance, pretrained and trained from scratch, both as proxy and target. This is what we do in the next section.
4 IDENTIFYING THE FACTORS THAT ENABLE ATTACK TRANSFERABILITY
In this section, we conduct a thorough ablation study and establish rigorously that SSL pretraining makes ASR models vulnerable to transferred attacks. We also measure the influence of several other factors on transferability. This ablation study requires the generation of many sets of adversarial examples, using varying models as proxies, which would be computationally difficult with the improved attack introduced in Section 3.1. Since we do not seek optimal performance, throughout this section we run the base attack of Section 2.2 with 1000 optimization steps.
4.1 INFLUENCE OF SELF-SUPERVISED LEARNING
In this section, we compare Wav2Vec2 models with varying amounts of pretraining data: 60k hours, 960h, or none at all. We use each model both as a proxy to generate adversarial noise and as a private model for evaluation with all other proxies.
As Wav2Vec2 models fine-tuned from scratch are not publicly available, we train our own models with no pretraining, using the Wav2Vec2 fine-tuning configurations on 960h of labeled data available in Fairseq (Ott et al., 2019). These configurations are likely suboptimal and our models achieve test-clean WERs of 9.1% (Large) and 11.3% (Base), much higher than the pretrained+fine-tuned Wav2Vec2 models. This performance discrepancy could affect the fairness of our comparison. We therefore add to our experiments Wav2Vec2 Base models fine-tuned on 1h and 10h of labeled data only. These models achieve test-clean WERs of 24.5% and 11.1%. Therefore we can observe the influence of SSL pretraining by taking model architecture and performance out of the equation.
Our attacks are not as strong as in Section 3.1, and only have a limited effect on the targeted WER. Therefore we evaluate results at the character level, which offers much finer granularity. For reference, we observe that the CER between two random pairs of sentences in LibriSpeech is 80-85% on average. Therefore attack success rates higher than 20% (i.e. CER < 80% with respect to the target) indicate a partially successful attack. We report those results in Table 2. Results in italics correspond to cases where the attacked model is the proxy or was fine-tuned from the same pretrained representation, and therefore do not correspond to a transferred attack.
These results show unambiguously that SSL pretraining plays a huge role in the transferability of adversarial attacks. Adversarial examples generated on the pretrained Wav2Vec2 models fine-tuned on 960h are partially successful on all pretrained models (success rate in the 25-46% range). They are however ineffective on the ASR models trained from scratch (4-8%). Similarly, models trained from scratch are bad proxies for pretrained models (2-3%) and even for each other (19-22%).
It follows that SSL pretraining is a necessary condition for transferable adversarial examples in both the proxy and the private model. We confirm it by plotting in Figure 3a the evolution of the target loss while generating one adversarial example. We display the loss for the proxy model (blue) and two private models. The loss of the pretrained private model (red) converges to a much lower value than the non-pretrained model (yellow).
SSL pretraining is however not a sufficient condition for attack transferability, and other factors play a role as well. For instance, the Base models fine-tuned on just 10h and 1h of labeled data are ineffective proxies: strong ASR models are likely better proxies than weaker ones.
4.2 INFLUENCE OF PRETRAINING DATA
As observed in Section 3, models that were (pre)trained on different data than the proxies can still be affected by transferred attacks. We analyse this effect in more detail in this section. We focus on five Wav2Vec2-Large models. One is pretrained and fine-tuned on LibriSpeech. One is pretrained on Libri-Light and fine-tuned on LibriSpeech. Two are pretrained on LV60k, CommonVoice, SwitchBoard and Fisher, and fine-tuned respectively on LibriSpeech and SwitchBoard. Finally, one is pretrained and fine-tuned on CommonVoice (English-only). As in the previous section, we evaluate every combination of proxy and target models.
We report the results in Table 3. We observe that most pairs of proxy and private models lead to important partial transferability. The major exception is the CommonVoice-only model, which does not succeed as a proxy for other models (0-8% success rate). In contrast, it is vulnerable to attacks transferred from other models, including those that do not have CommonVoice in their training data. We also note that models pretrained on Libri-Light or more (60k+ hours) are better proxies, and more vulnerable to attacks, than the LibriSpeech-only and CommonVoice-only models. In other words, the vulnerability that we point out is worsened rather than mitigated by increasing amounts of available data.
4.3 MODEL SIZE AND TRAINING HYPERPARAMETERS
We now extend our ablation study to models pretrained with different SSL paradigms. We report the results in Table 4. We observe that adversarial examples also transfer between models trained with different paradigms. Moreover, at equal amounts of pretraining data, not all models are equal proxies, and the HuBERT Large model (pretrained on 60kh) is the best proxy by a large margin.
5 A HYPOTHESIS FOR THE VULNERABILITY OF SSL-PRETRAINED MODELS
We have established a link between adversarial transferability and the SSL pretraining of ASR models. In this section we propose a hypothesis explaining that link. We first show in Section 5.1, with empirical justification, that attacks with a very precise target are much harder to transfer, everything else being equal, explaining why targeted ASR attacks are usually nontransferable. Then in Section 5.2 we suggest ways in which SSL alleviates these difficulties, thus recovering some transferability.
5.1 AT EQUAL WHITE-BOX SUCCESS, VERY TARGETED ATTACKS ARE HARDER TO TRANSFER
Targeted attacks on CIFAR10 force the model to predict one out of 10 different labels. Targeted attacks on ASR models force the model to transcribe one of all the possible transcriptions: With sequences of just five English words the number of possibilities is equal to 170000^5 ∼ 10^26. We can call such an attack ”very targeted”, by contrast to more ”mildly targeted” attacks on CIFAR10.
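For a quick order-of-magnitude check (assuming the vocabulary of roughly 170,000 English words implied by that figure): 170000^5 = (1.7 × 10^5)^5 ≈ 1.4 × 10^26.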
We hypothesize that the target precision, or ”how targeted” the attack is, negatively affects its transferability success rate, explaining why targeted ASR attacks do not transfer easily. To demonstrate it empirically, we can imagine an experiment where an attacker tries to run a very targeted attack on CIFAR10. We hypothesize that in such a case, the transferred attack success rate would drop even if the white box attack success rate remains high. Inversely, if we designed a ”mildly targeted” attack on ASR models, we would expect it to achieve a non-trivial transferability success rate. We designed experiments for both cases, which we summarize below. Complete experimental details and results are provided in Appendix B.
5.1.1 VERY TARGETED ATTACKS ON CIFAR10
We run an attack on a ResNet CIFAR10 model. We do not just enforce the model's most probable output (top-1 prediction) but the first k most probable outputs (top-k prediction). For example with k = 3, given an image of an airplane, the attack objective could be to modify the image such that the most probable model output is ”car”, the second most probable is ”bird” and the third is ”frog”. Our attack algorithm sets a ”target distribution” of classes, then minimizes the KL divergence between the model's probabilistic outputs and the target, using Projected Gradient Descent. The success rate is evaluated by matching the top k predictions and the top k targets.
We compute the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k. For k = 1, we measure a transferability success rate above 30%. However, as k increases, the transferability success rate drops close to 10%, which is the success threshold that a random model would achieve. In other words, the transferability becomes null as k increases. Meanwhile, the white box attack success rate remains above 95%. Therefore very targeted attacks on images do not transfer.
5.1.2 MILDLY TARGETED ATTACKS ON ASR
We train five small Conformer models on LibriSpeech. On each of them we generate targeted adversarial examples. The target objective is simply to prepend the word ”But” to the original transcription. This makes for a much less targeted attack than is traditionally done with ASR. The attack success rate is evaluated simply by checking the presence of the word ”But” at the beginning of the prediction. We restrict evaluation to inputs whose transcription does not start with that word.
For each model, we generate 100 adversarial examples and evaluate them on all 4 other models. We thus obtain 20 different transferability success rates. The average of these scores is 18% with a standard deviation of 4.7%. Therefore mildly targeted attacks on ASR transfer substantially better than regular, very targeted attacks. Equivalent experiments with very targeted ASR attacks are reported in Abdullah et al. (2021b): the word-level transferability success rate is 0%.
5.2 VERY TARGETED TRANSFERABILITY REQUIRES IMPORTANT FEATURE OVERLAP
Why would very targeted attacks transfer less? As Ilyas et al. (2019) show, statistically meaningful patterns in the training data may be ”robust” (i.e. resilient to small perturbations) or non-robust. By leveraging non-robust features attackers can generate adversarial perturbations - and as these features can be learned by any model, these perturbations will transfer. The underlying assumption behind this framework is that all models learn the same features. In practice, two separate models do not learn identical features due to randomness in training. But if they are ”close enough”, i.e. if the feature overlap between both models is important, then transferability will be observed.
It therefore makes perfect sense that more targeted attacks would transfer less. The more precise and difficult the attack objective is, the more features the attacker will depend on to achieve it. This increases the amount of feature overlap needed between proxy and private model for the attack to transfer. In the case of targeted ASR attacks, the required overlap is considerable. We hypothesize that SSL pretraining increases the feature overlap between ASR models. As empirically verifying it would pose important difficulties, we propose a high-level justification of that hypothesis.
ASR training aims at learning a representation that enables speech transcription. A subset of all features is sufficient to achieve this objective: for instance, there are lots of redundancies between low-frequency and high-frequency features, and a human listener can easily transcribe speech where most frequencies have been filtered out. The set of features learned by ASR models is therefore underspecified: two models, even trained very similarly or identically, may learn representations with little overlap.
Self-Supervised Learning, on the other hand, does not only learn features useful for transcription but also features needed for predicting the input itself: parts of the input are masked, then they (or their quantized or clusterized form) are predicted using context. Arguably this much more ambitious objective requires the network to learn as many features as possible. In fact, the goal of such pretraining is to learn useful representations not just for ASR but for any downstream task - i.e. ”exhaustive” representations. Intuitively, different models trained in that way would share many more features than ASR models trained from scratch - leading to more transferable adversarial examples.
6 RELATED WORK
The transferability of adversarial attacks has been known for many years in Image Classification (Papernot et al., 2016). On ASR it has been limited to simple attack objectives, like preventing WakeWord detection in Alexa (Li et al., 2019) or signal processing-based attacks (Abdullah et al., 2021a; 2022b). When it comes to optimization-based attacks on large ASR models, transferability claims are usually limited and focus on untargeted attacks (Wu et al., 2022). In very specific cases there have been limited claims of targeted, transferable attacks, such as Yuan et al. (2018); however, this work does not focus on imperceptible attacks with small amounts of noise, but rather attacks embedded in music. When it comes to standard targeted optimization attacks, Abdullah et al. (2021b) have shown that they display no transferability on DeepSpeech2 models, even when the proxy and the attacked model are trained with identical hyperparameters apart from the initial random seed.
Past ASR adversarial attacks usually focus on a handful of neural architectures, typically DeepSpeech2 (Amodei et al., 2016) and sometimes Listen, Attend and Spell (Chan et al., 2016). Only recently have attacks been extended to multiple recent architectures for a fair comparison between models (Lu et al., 2021; Olivier & Raj, 2022; Wu et al., 2022). Most related to this work is Wu et al. (2022), which focuses on the vulnerability of SSL speech models. They however focus on attacking the base pretrained model with untargeted noise that remains effective on downstream tasks. We study targeted attacks, with a much deeper focus on transferability between different models. Olivier & Raj (2022) have hinted that Wav2Vec2 models are vulnerable to transferred attacks, but only report limited results on two models and do not investigate the cause of that phenomenon. We attribute it to SSL pretraining and back our claims empirically.
Abdullah et al. (2022a) have identified factors that hinder transferability for ASR attacks, such as MFCC features, Recurrent Neural Networks, and large output sizes. Wav2Vec2 is a CNN-Transformer model with character outputs: this gives it a better prior than DeepSpeech2 for achieving transferable adversarial attacks. However, according to that paper, this should be far from sufficient to obtain transferable attacks: our results differ in the case of SSL-pretrained models.
7 CONCLUSION
We have shown that ASR targeted attacks are transferable between SSL-pretrained ASR models. Direct access to their weights is no longer required to fool models into predicting outputs of the attacker's choice - and to an extent, knowledge of their training data is not required either. With that in mind, and given the existence of over-the-air attack algorithms, we expect attacks against ASR models to become a practical, realistic threat as soon as Wav2Vec2-type models are deployed in production.
In that context, it is paramount to develop adversarial defense mechanisms for ASR models. Fortunately, such defenses already exist, but they come at the cost of a tradeoff in model performance. We illustrate it in appendix E. Further research should be carried out into mitigating that tradeoff and adapting to ASR the most effective defenses in image classification, such as adversarial training.
A EXPERIMENTAL DETAILS FOR LIBRISPEECH EXPERIMENTS
A.1 FRAMEWORKS
We compute adversarial examples using the robust speech framework (Olivier & Raj, 2022). This library uses Speechbrain (Ravanelli et al., 2021) to load and train ASR models and offers implementations of various adversarial attack algorithms. Models and attacks are implemented using PyTorch (Paszke et al., 2019).
We use robust speech for evaluation on SpeechBrain-supported models. In Section 3 we export a HuggingFace Dataset (Lhoest et al., 2021), then evaluate models via the HuggingFace Transformers (Wolf et al., 2020) library. Finally, we use Fairseq (Ott et al., 2019) for training models from scratch.
All of our robust speech and Fairseq configurations are released alongside this article.
A.2 ATTACK HYPERPARAMETERS
We exploit the Carlini&Wagner attack (see section 2.2) implemented in robust speech, with the following hyperparameters:
• initial ϵ: 0.015 (and 0.04 in appendix D.1)
• learning rate: 0.0005
• number of decreasing ϵ values: 1
• Regularization constant c: 10
• optimizer: SGD
• attack iterations: 10000 in section 3.1, 1000 in section 4
A.3 DATASET AND TARGETS
Our adversarial dataset in section 3.1 consists of 85 sentences from the LibriSpeech test-clean set. To extract these sentences we take the first 200 sentences in the manifest, then keep only those shorter than 7 seconds. In section 4, we take the first 100 sentences and filter those shorter than 14 seconds.
As attack targets, we use actual LibriSpeech sentences sampled from the test-other set. Our candidate targets are:
• Let me see how can i begin
• Now go I can’t keep my eyes open
• So you are not a grave digger then
• He had hardly the strength to stammer
• What can this mean she said to herself
• Not years for she’s only five and twenty
• What does not a man undergo for the sake of a cure
• It is easy enough with the child you will carry her out
• Poor little man said the lady you miss your mother don’t you
• At last the little lieutenant could bear the anxiety no longer
• Take the meat of one large crab scraping out all of the fat from the shell
• Tis a strange change and I am very sorry for it but I’ll swear I know not how to help it
• The bourgeois did not care much about being buried in the Vaugirard it hinted at poverty pere Lachaise if you please
For each sentence we attack, we assign the candidate target whose length is closest to that of the sentence's original transcription.
A.4 MODELS
A.4.1 TRAINING WAV2VEC2 MODELS FROM SCRATCH
We use Fairseq to train Base and Large Wav2Vec2 models from scratch. Unfortunately, no configuration or pretrained weights have been released for that purpose, and we resort to using Wav2Vec2 fine-tuning configurations while simply skipping the pretraining step. Despite our attempts to tune training hyperparameters, we do not match the expected performance of a Wav2Vec2 model trained from scratch: Baevski et al. (2020) report a WER of 3.0% for a large model, while we only get 9.1%.
A.4.2 GENERATING ADVERSARIAL EXAMPLES
Wav2Vec2, HuBERT and Data2Vec models are all supported directly in robust speech and are therefore those we use for generating adversarial examples. We use the HuggingFace backend of Speechbrain for most pretrained models, and its Fairseq backend for a few (Wav2Vec2-Base models finetuned on 10h and 1h, and models trained from scratch). In both cases, the model’s original tokenizer cannot be loaded in SpeechBrain directly. Therefore, we fine-tune the final projection layer of each model on 1h of LibriSpeech train-clean data.
The Wav2Vec2 model pretrained and fine-tuned on CommonVoice is a SpeechBrain original model. Similarly, we fine-tune it on 1h of LibriSpeech data as a shift from the CommonVoice output space to the LibriSpeech one. As a result, all our models share the same character output space.
A.4.3 EVALUATING PRETRAINED MODELS
In section 3, we directly evaluate models from HuggingFace Transformers and SpeechBrain on our adversarial dataset, without modification.
B EXPERIMENTAL DETAILS AND RESULTS FOR SMALL-SCALE EXPERIMENTS
This section describes the experimental details used in section 5.
B.1 CIFAR10 EXPERIMENTS
We use a pretrained ResNet18 as proxy, and a pretrained ResNet50 as private model.
Our ”very targeted attack” PGD_k consists in applying the following steps for each input (a minimal code sketch follows the list):
• target selection. We sample uniformly an ordered subset of k classes out of 10 (e.g. with k = 3: (2, 5, 6)). We also sample a point uniformly on the unit k-simplex {x_1, ..., x_k ∈ [0, 1] : ∑_i x_i = 1}, by sampling from an exponential distribution and normalizing (Onn & Weissman, 2011) (e.g. (0.17, 0.55, 0.28)). We combine the two to obtain a 10-dimensional vector with zero probability on all but the selected k classes (y = (0, 0.17, 0, 0, 0.55, 0.28, 0, 0, 0, 0)). This is our target.
• During the attack, we use Projected Gradient Descent (Madry et al., 2018) to minimize the KL divergence KL(f(x), y) between the softmax output and the target, within L2 radius ϵ = 0.5. We use learning rate 0.1 for k ∗ 1000 attack steps.
• We measure attack success rate by measuring the top-k match between f(x) and y:
acc = (1/k) ∑_{i=1}^{k} 1[argsort(f(x))_i = argsort(y)_i]
with argsort(y) returning the indices of the sorted elements of y in decreasing order. For instance f(x) = (0.1, 0.05, 0.05, 0.05, 0.35, 0.2, 0.05, 0.05, 0.05, 0.05) would get an accuracy of 0.666, as the top 2 classes match with y but not the third.
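The sketch below illustrates this procedure for a single image, assuming a standard PyTorch classifier returning logits over 10 classes; the L∞ clamp with ϵ = 0.03 matches the success-rate figure quoted in the main text, and the remaining constants follow the bullet points above.

import torch
import torch.nn.functional as F

def very_targeted_attack(model, x, k, eps=0.03, lr=0.1, steps_per_class=1000):
    # target selection: k random classes, probabilities drawn uniformly on the k-simplex
    classes = torch.randperm(10)[:k]
    probs = torch.distributions.Exponential(1.0).sample((k,))
    probs = probs / probs.sum()
    y = torch.zeros(10)
    y[classes] = probs
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(k * steps_per_class):
        log_p = F.log_softmax(model(x + delta), dim=-1)
        loss = F.kl_div(log_p, y.unsqueeze(0), reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad
            delta.clamp_(-eps, eps)   # L-infinity projection
            delta.grad.zero_()
    return (x + delta).detach(), y

def topk_match(logits, y, k):
    # logits, y: 1-D tensors for a single example; fraction of the k most
    # probable predicted classes that appear in the same order as the target
    pred = logits.argsort(descending=True)[:k]
    target = y.argsort(descending=True)[:k]
    return (pred == target).float().mean().item()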
We evaluate attacks on 256 random images from the CIFAR10 dataset. For each value of k between 1 and 10 we repeat the experiment 3 times and average the attack success rates. In figure 2 we plot
the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k.
B.2 MILDLY TARGETED ASR ATTACKS
We train 5 identical conformer encoder models with 8 encoder layers, 4 attention heads, and hidden dimension 144. We train them with CTC loss for 30 epochs on the LibriSpeech train-clean-100 set, with different random seeds.
We run an L2-PGD attack with an SNR bound of 30dB, in which we minimize the cross-entropy loss between the utterance and its transcription prepended with the word ”But”. The utterances we attack are the first 100 sentences in the LibriSpeech test-clean set, from which we remove 7 sentences already starting with the word ”But”. We generate adversarial examples using each of the 5 models as proxy, and evaluate these examples on all 5 models. We report the full results in Table 5.
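A sketch of the word-level success check used here follows, assuming a transcribe(model, audio) helper that returns the decoded string (a hypothetical helper, standing in for the SpeechBrain decoding pipeline).

def prepend_but_success_rate(model, adversarial_batch, references):
    # success = the decoded transcription starts with "but"; utterances whose
    # reference already starts with that word are excluded from evaluation
    hits, total = 0, 0
    for adv, ref in zip(adversarial_batch, references):
        if ref.lower().startswith("but "):
            continue
        prediction = transcribe(model, adv)  # hypothetical decoding helper
        hits += int(prediction.lower().startswith("but "))
        total += 1
    return hits / max(total, 1)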
C FULL RESULTS TABLE FOR CROSS-MODEL ATTACKS
Table 6 completes the ablation study in Section 4 by evaluating all pairwise Proxy-Model combinations in our pool of Wav2Vec2-type models.
D INFLUENCE OF HYPERPARAMETERS ON ATTACK RESULTS
D.1 ATTACK RADIUS
In Table 7 we extend the results of Table 1 by comparing attack results for two different attack radii. These radii are ϵ = 0.015 and ϵ = 0.04, corresponding respectively to Signal-Noise Ratios of
30dB and 22dB. The former setting is identical to Table 1; the latter is substantially larger, and corresponds to a more easily perceptible noise.
Looking at the white-box attack results on the proxy models, the difference is drastic: with larger noise the targeted success rate jumps from 88% to 98%. The transferred attack results on SSL-pretrained models also increase overall, with success increases ranging from 0% (Wav2Vec2-Large) to 20% (Data2Vec-Large) and a median increase of 10%. Crucially however, the targeted success does not increase at all, and even decreases, for ASR models trained from scratch. This confirms that there is a structural difference between the robustness of ASR models with and without SSL, one that cannot be bridged simply by increasing the attack strength.
D.2 LANGUAGE MODELS
In section 3 we report the results of our adversarial dataset on multiple Wav2Vec2-type models, enhanced with an N-gram language model whenever available. In Table 8 we evaluate the influence of that language model on attack results.
We observe that the attack success rate systematically increases by 8 to 17% when adding a language model to the ASR model. This is understandable considering that our targets are sound English sentences: if a model tends to transcribe that target with mistakes, the language model can bridge that
gap. To put it differently, the more prone an ASR model is to output sentences in a given distribution, the more vulnerable it is to attacks with targets sampled from that distribution. Language models are therefore more of a liability than a defense against attacks, and most likely so would be many tricks applied to an ASR model in order to improve its general performance.
D.3 EFFECT OF MODEL REGULARIZATION ON TRANSFERABILITY
As mentioned in Section 2.2 we use regularization tricks like dropout in all proxy models when optimizing the adversarial perturbation. In Figure 3b we plot the loss on proxy and private models without that regularization, for comparison with Figure 3a. We observe that the loss degrades significantly on private models without regularization.
On the other hand, the loss on the proxy converges much faster in Figure 3b: removing model regularization makes for better, faster white-box attacks, at the cost of all transferability. To the best of our knowledge, past work like Carlini & Wagner (2018) has not used regularization during generation, which explains why they report better white-box attacks than we do in terms of WER and SNR. However, as we have established above, applying regularization against standard ASR models does not lead to transferable adversarial examples: for that, SSL pretraining is also required.
E DEFENDING AGAINST ADVERSARIAL EXAMPLES
Although we have shown that adversarial attacks can represent an important threat for private, SSL-based ASR models, it is possible to defend against them. Randomized smoothing (Cohen et al., 2019) is a popular adversarial defense that has been applied to ASR in the past (Olivier & Raj, 2021) and comes with some robustness guarantees. It consists in applying to the inputs, before feeding them to the model, random Gaussian noise that is significantly larger in L2 norm than any potential adversarial perturbation. For reference, we apply it to some of our models.
We follow Olivier & Raj (2021) and enhance randomized smoothing with a-priori SNR estimation and ROVER voting (with 8 outputs) to boost performance. We use a Gaussian standard deviation σ = 0.02. For evaluation, we simply check the effect of the adversarial examples generated in Section 3.1 on the smoothed model. A rigorous evaluation would require us to design adaptive attacks (Athalye et al., 2018; Tramer et al., 2020); since this paper does not focus on claiming robustness to attacks, we restrict ourselves to this simpler setting.
We report our results in Table 9 for the Wav2Vec2-Base, Wav2Vec2-Large and Data2Vec-Large models, pretrained and fine-tuned on 960h of LibriSpeech training data. We observe that randomized smoothing is sufficient to block the targeted attack completely (0% success rate) and recover most of the original transcription (the untargeted success rate drops to 14-34% depending on the model). However, due to the addition of Gaussian noise to all inputs, the defense takes a toll on clean-data performance: the WER jumps by 4-10%. The standard deviation σ controls this tradeoff between robustness and performance; we chose the value of σ that minimizes the untargeted success rate.
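A minimal sketch of this defense at inference time is given below, assuming transcribe(model, audio) and rover_vote(hypotheses) helpers for decoding and for combining outputs; both are hypothetical stand-ins for the actual SpeechBrain and ROVER tooling.

import torch

def smoothed_transcribe(model, x, sigma=0.02, n_votes=8):
    # add Gaussian noise larger (in L2 norm) than plausible adversarial
    # perturbations, decode several noisy copies, then vote on the outputs
    hypotheses = []
    for _ in range(n_votes):
        noisy = x + sigma * torch.randn_like(x)
        hypotheses.append(transcribe(model, noisy))  # hypothetical decoding helper
    return rover_vote(hypotheses)                    # hypothetical ROVER-style voting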
Unsurprisingly, randomized smoothing is a promising protection against transferred attacks, but it does leave room for improvement. These results illustrate the need for additional research on adversarial defenses. | 1. What is the focus and contribution of the paper regarding targeted and transferable adversarial examples?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its potential impact on developing more robust ASR systems?
3. How do the reviewer's questions and comments relate to the paper's content, and what additional aspects does the reviewer suggest exploring?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes targeted and transferable adversarial examples for self-supervised ASR models, which follow a pretraining + fine-tuning architecture. An adversary can make use of the transferability property, that is, an adversarial sample produced for a proxy ASR can also fool a different remote ASR. Several self-supervised ASR models, such as Wav2Vec2, HuBERT, Data2Vec and WavLM, are investigated, and similar results are shown with detailed experiments and comparisons.
Strengths And Weaknesses
Strong:
1 The targeted and transferable adversarial examples are interesting and should benefit the research and development of better and more robust ASR systems built on the pretraining + fine-tuning architecture.
2 A rich set of existing SOTA models is investigated, showing that transferability is a frequent phenomenon.
3 Code is attached, giving the paper a high degree of reproducibility.
Weak:
1 I would prefer to see richer experiments on language models, using quite different datasets across different languages, to better study the transferability among pretrained speech representations;
2 If solutions to these well-designed adversarial examples could be provided, this paper would be rich in novel solutions as well.
Detailed questions and comments
1 So, how can current self-supervised models be improved to better deal with these transferable adversarial examples? This would be more valuable for building robust ASR systems.
2 Can your ASR attacking strategies influence other people's normal usage of existing ASR systems? Say you prepared a special group of inputs and all ASR systems failed: does this influence other people's usage? And if the same types of attack data were used for training these models, will they fail forever if not fixed? Based on my experience, most ASR systems are still quite fragile: they fail a lot even with quite clean and clear voice inputs, and they will surely fail if the inputs further include rich, carefully designed noise.
3 Is there any further comparison of the transferability from one language's pretrained model to another language's pretrained model? For example, how does the size of the datasets involved affect the transferability?
Clarity, Quality, Novelty And Reproducibility
This paper is clearly written, with a code attachment giving it high reproducibility. The idea of transferable adversarial examples is interesting. This paper could be scored higher if it provided solutions to the adversarial examples (or investigated existing anti-attack methods).
ICLR | Title
Watch What You Pretrain For: Targeted, Transferable Adversarial Examples on Self-Supervised Speech Recognition models
Abstract
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
N/A
A targeted adversarial attack produces audio samples that can force an Automatic Speech Recognition (ASR) system to output attacker-chosen text. To exploit ASR models in real-world, black-box settings, an adversary can leverage the transferability property, i.e. that an adversarial sample produced for a proxy ASR can also fool a different remote ASR. However recent work has shown that transferability against large ASR models is very difficult. In this work, we show that modern ASR architectures, specifically ones based on Self-Supervised Learning, are in fact vulnerable to transferability. We successfully demonstrate this phenomenon by evaluating state-of-the-art self-supervised ASR models like Wav2Vec2, HuBERT, Data2Vec and WavLM. We show that with low-level additive noise achieving a 30dB Signal-Noise Ratio, we can achieve target transferability with up to 80% accuracy. Next, we 1) use an ablation study to show that Self-Supervised learning is the main cause of that phenomenon, and 2) we provide an explanation for this phenomenon. Through this we show that modern ASR architectures are uniquely vulnerable to adversarial security threats.
1 INTRODUCTION
Adversarial audio algorithms are designed to force Automatic Speech Recognition (ASR) models to produce incorrect outputs. They do so by introducing small amounts of imperceptible, carefully crafted noise to benign audio samples that can force the ASR model to produce incorrect transcripts. Specifically, targeted adversarial attacks (Carlini & Wagner, 2018; Qin et al., 2019) are designed to force ASR models to output any target sentence of the attacker’s choice. However, these attacks have limited effectiveness as they make unreasonable assumptions (e.g., white-box access to the model weights), which are unlikely to be satisfied in real-world settings.
An attacker could hypothetically bypass this limitation by using the transferability property of adversarial samples: they generate adversarial samples for a white-box proxy model; then pass these to a different remote black-box model, as we illustrate in Figure 1a. Transferability has been successfully demonstrated in other machine learning domains, like computer vision (Papernot et al., 2016). This is a sample text in black. Yet for ASR, recent work has shown that transferability is close to non-existent between large models Abdullah et al. (2021b), even between identically trained models (i.e., same training hyper-parameters, even including the random initialization seed). These findings were demonstrated on older ASR architectures, specifically on LSTM-based DeepSpeech2 models trained with CTC loss. However, robustness properties sometimes vary considerably between different ASR architectures (Lu et al., 2021; Olivier & Raj, 2022), and it is worth studying adversarial transferability on more recent families of models.
In this work, we evaluate the robustness of modern transformer-based ASR architectures. We show that many state-of-the-art ASR models are in fact vulnerable to the transferability property. Specifically, our core finding can be formulated as follows:
Pretraining transformer-based ASR models with Self-Supervised Learning (SSL) makes them vulnerable to transferable adversarial attacks.
SSL is an increasingly popular learning paradigm in ASR (Figure 1b), used to boost model performance by leveraging large amounts of unlabeled data. We demonstrate that it hurdles robustness by making the following contributions:
• First, we show that most public SSL-pretrained ASR models are vulnerable to transferability. We generate 85 adversarial samples for the proxy HuBERT and Wav2Vec2 models (Section 3). We show that these samples are effective against a wide panel of public transformer-based ASRs. This includes ASRs trained on different data than our proxies.
• Second, we show that SSL-pretraining is the reason for this vulnerability to transferability. We do so using an ablation study on Wav2Vec2-type models.
• Third, we propose an explanation for this curious phenomenon. We argue that targeted ASR attacks need considerable feature overlap to be transferable; and that SSL objectives encourage such feature overlap between different models.
Our results show that SSL, a line of work gathering attention in the ASR community that has pushed the state-of-the-art on many benchmarks, is also a source of vulnerability. Formerly innocuous attacks with unreasonable assumptions are now effective against many modern models. As it is likely that SSL will be used to train ASR systems in production, our results pave the way for practical, targeted attacks in the real world. By no means do these results imply that this line of work should be aborted, but they emphasize the pressing need to focus on robustness alongside performance.
2 BACKGROUND
2.1 SSL PRETRAINING FOR ASR MODELS
We describe in this Section the principles of SSL-pretrained ASR models, whose robustness to attacks we evaluate in this work. These models usually follow the neural architecture of Wav2Vec2 (Baevski et al., 2020). Raw audio inputs are fed directly to a CNN. A Transformer encodes the CNN outputs into contextualized representations. A final feed-forward network projects these representations in a character output space. The model is fine-tuned with CTC loss (Graves et al., 2006).
A number of different models follow this architecture, including Wav2Vec2, HuBERT (Hsu et al., 2021), Data2Vec (Baevski et al., 2022), UniSpeech-SAT (Wang et al., 2021; Chen et al., 2021b) or WavLM (Chen et al., 2021a). These networks only have very minor differences in their architectures, to the point that standardized sizes are used for all of them. Base models have 12 transformer hidden layers and 90M parameters. Large models have 24 layers and 300M parameters. Finally, XLarge models have 48 layers for a total of 1B parameters.
While the networks are similar, the training pipelines of these models differ substantially. All models are pretrained on large amounts of unlabeled data, then fine-tuned for ASR on varying quantities of labeled data. The pretraining involves SSL objectives, such as Quantization and Contrastive Learning (Wav2Vec2), offline clustering and masked predictions (HuBERT), or masked prediction
of contextualized labels (Data2Vec). Unispeech combines SSL and CTC pretraining with multitask learning. WavLM adds denoising objectives and scales to even greater amounts of unlabeled data.
SSL pretraining is helpful in many regards: it makes the same network easy to fine-tune for multiple downstream tasks with little labeled data and has improved state-of-the-art results in ASR benchmarks, especially in low-resource settings. As we demonstrate, it is also a source of vulnerabilities.
2.2 ADVERSARIAL ATTACKS
Adversarial examples are inputs modified imperceptibly by an attacker to fool machine learning models (Szegedy et al., 2014; Goodfellow et al., 2014; Carlini & Wagner, 2016; Madry et al., 2018). While most works have focused on image classification, several created of adapted attacks for other tasks such as ASR (Cisse et al., 2017; Carlini & Wagner, 2018; Qin et al., 2019).
The attack we use is based on the Carlini&Wagner ASR attack (Carlini & Wagner, 2018), although slightly simplified. Given an input x, a target transcription yt, and an ASR model f trained with loss L, our attack finds an additive perturbation δ optimizing the following objective:
min δ
L(f(x+ δ), yt) + c ∥δ∥22 s.t. ∥δ∥∞ < ϵ (1)
which we optimize using L∞ Projected Gradient Descent. While the CW attack typically uses a large initial ϵ, then gradually reduces it as it finds successful perturbations, we fix a single value of ϵ and optimize for a fixed number of iterations. We find that this scheme, closer to the PGD algorithm Madry et al. (2018), greatly improves attack transferability. However we keep using the L2 regularization term c ∥δ∥22 introduced in the CW attack. We also find that applying regularization such as dropout during attack optimization greatly helps to generate transferable perturbations. This effect is analyzed more in detail in Appendix D.3. Throughout the rest of the paper, we run all attack optimization steps using the default dropout, layer drop, etc. that the proxy model used during training (typically a dropout of 0.1).
3 TRANSFERABLE ATTACK ON STATE-OF-THE-ART ASR MODELS
In our core experiment, we fool multiple state-of-the-art SSL-pretrained ASR models with targeted and transferred adversarial attacks. We generate a small set of targeted audio adversarial examples using fixed proxy models. We then transfer those same examples on a large number of models available in the HuggingFace Transformers library. Table 1 specifies how much unlabeled and labeled data these models were trained on. We provide the full experimental details in appendix A.
3.1 GENERATING ADVERSARIAL EXAMPLES ON PROXIES
We describe our procedure to generate adversarial examples. To maximize the transferability success rate of our perturbations we improve the base attack in Section 2.2 in several key ways:
• To limit attack overfitting on our proxy, we combine the losses of two proxy models: Wav2Vec2 and HuBERT (LARGE). Both models were pretrained on the entire LV60k dataset and finetuned on 960h of LibriSpeech. As these models have respectively a contrastive and predictive objective, they are a representative sample of SSL-pretrained ASR models. The sum of their losses is used as the optimization objective in Equation 1.
• We use 10000 optimization steps, which is considerable (for comparison Carlini & Wagner (2018) use 4000) and can also lead to the adversarial noise overfitting the proxy models. To mitigate this effect we use a third model, the Data2Vec BASE network trained on LibriSpeech, as a stopping criterion for the attack. At each attack iteration, we feed our adversarial example to Data2Vec, and keep track of the best-performing perturbation (in terms of WER). We return that best perturbation at the end of the attack; a sketch of this ensemble-and-early-stopping procedure is given after this list. Because this procedure is computationally expensive, we only apply it to a subset A of 85 utterances of less than 7 seconds. We sample them randomly in the LibriSpeech test-clean set. We select attack targets at random: we sample a completely disjoint subset B of
utterances in the LibriSpeech test-other set. To each utterance in A we assign as target the transcription of the sentence in B whose length is closest to its own. This ensures that a very long target isn’t assigned to a very short utterance or vice versa.
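The sketch below compresses this procedure (helper names are illustrative): the two proxy losses are summed, and the perturbation we keep is the one that achieves the lowest WER against the target on the held-out Data2Vec model.

```python
import torch

def ensemble_attack(proxies, ctc_losses, stop_model, wer_to_target, x, y_t,
                    eps=0.015, c=10.0, lr=5e-4, steps=10000):
    """Sketch of the improved attack: sum the losses of several proxies and
    early-stop on a third model. `wer_to_target(model, audio, y_t)` returns
    the WER of the model's transcription against the attack target."""
    for m in proxies:
        m.train()                                # keep dropout active
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    best_delta, best_wer = delta.detach().clone(), float("inf")
    for _ in range(steps):
        loss = sum(l(m(x + delta), y_t) for m, l in zip(proxies, ctc_losses))
        loss = loss + c * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
            wer = wer_to_target(stop_model, x + delta, y_t)
            if wer < best_wer:                   # stopping criterion
                best_wer, best_delta = wer, delta.detach().clone()
    return best_delta
```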
3.2 TRANSFERRING ADVERSARIAL EXAMPLES ON ASR MODELS
We evaluate all SSL-pretrained models mentioned in Section 2.1, along with several others for comparison: the massively multilingual speech recognizer or M-CTC (Lugosch et al., 2022) trained with pseudo-labeling, and models trained from scratch for ASR: the Speech-to-text model from Fairseq (Wang et al., 2020) and the CRDNN and Transformer from SpeechBrain (Ravanelli et al., 2021).
3.3 METRICS
We evaluate the performance of ASR models with the Word-Error-Rate (WER) between the model output and the correct outputs.
When evaluating the success of adversarial examples, we can also use the Word-Error-Rate, this time between the prediction and the attack target yt: a low WER then indicates a successful attack. We therefore define the word-level targeted attack success rate as
$$\mathrm{TASR} = \max\big(1 - \mathrm{WER}(f(x+\delta),\, y_t),\, 0\big) \qquad (2)$$
It is also interesting to look at the results of the attack in terms of denial-of-service, i.e. the attack’s ability to stop the model from predicting the correct transcription y. Here a high WER indicates a successful attack. We define the word-level untargeted attack success rate as
$$\mathrm{UASR} = \min\big(\mathrm{WER}(f(x+\delta),\, y),\, 1\big) \qquad (3)$$
We can also compute the attack success rate at the character level, i.e. using the Character-Error-Rate (CER) instead of the Word-Error-Rate. Character-level metrics are interesting when using weaker attacks that affect the model, but not enough to reduce the targeted WER significantly. We use them in our ablation study in Section 4.
Finally, we control the amount of noise in our adversarial examples with the Signal-Noise Ratio (SNR), defined as
$$\mathrm{SNR}(\delta, x) = 10 \log\!\left(\frac{\|x\|_2^2}{\|\delta\|_2^2}\right) \qquad (4)$$
for an input x and a perturbation δ. When generating adversarial examples, we adjust the L∞ bound ϵ (Equation 1) to achieve a target SNR.
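In code, these three metrics can be computed as in the sketch below; we use the `jiwer` package as one possible WER implementation, and `x` and `delta` are the waveform and perturbation tensors.

```python
import math
import jiwer  # one possible word-error-rate implementation

def attack_metrics(prediction, target, reference, x, delta):
    """Sketch of Eqs. (2)-(4): targeted / untargeted success rates and SNR."""
    tasr = max(1.0 - jiwer.wer(target, prediction), 0.0)    # Eq. (2)
    uasr = min(jiwer.wer(reference, prediction), 1.0)       # Eq. (3)
    snr_db = 10 * math.log10(x.pow(2).sum().item()
                             / delta.pow(2).sum().item())   # Eq. (4)
    return tasr, uasr, snr_db
```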
3.4 RESULTS
We report the results of our adversarial examples in Table 1 for ϵ = 0.015, corresponding to a Signal-Noise Ratio of 30dB on average. In Appendix D.1 we also report results for a larger ϵ value.
On 12 out of 16 models, we observe that the attack achieves total denial-of-service: the untargeted success rate is 100%. Moreover, on the first 6 models (proxies aside), the targeted attack success rate ranges between 50% and 81%: the target is more than half correctly predicted! These results are in flagrant contradiction with past works on DeepSpeech2-like models, where even the slightest change in training leads to a total absence of targeted transferability between proxy and private model. Our private models vary from the proxies in depth, number of parameters and even training methods, yet we observe important transferability. However, these 6 models have all been pretrained on LibriSpeech or Libri-Light with SSL pretraining, i.e. the same data distribution as our proxies.
The following five models were pretrained on different datasets. One was pretrained on a combination of Libri-Light, VoxPopuli and GigaSpeech; two on Libri-Light, CommonVoice, SwitchBoard and Fisher; and two on CommonVoice either multilingual or English-only. The transferability success rate on these five models ranges from 18% to 67%, which is significant. Even the CommonVoice models, whose training data has no intersection with Libri-Light, are partially affected.
Although our inputs and attack targets are in English, we apply them to a French-only CommonVoice Wav2Vec2. This model, which is incapable of decoding clean LibriSpeech data, is also unaffected by our targeted perturbation. It therefore seems that, while multilingual models are not robust to our examples, a minimal performance on the original language is required to observe transferability.
The final 4 models, for which the targeted transferability rate is null or close to null, are those that were not SSL-pretrained at all (including M-CTC, which was pretrained with pseudo-labeling). These four models also partially resist the untargeted attack.
It emerges from these results that some recent ASR models, specifically those pretrained with SSL, can be vulnerable to transferred attacks. These results diverge significantly from previous works like (Abdullah et al., 2021b; 2022a), which showed no transferability between different models. Table 1 hints that SSL pretraining plays an important role in transferability, but does not prove it: to do so we would need to compare models of identical architecture and performance, pretrained and trained from scratch, both as proxy and target. This is what we do in the next section.
4 IDENTIFYING THE FACTORS THAT ENABLE ATTACK TRANSFERABILITY
In this section, we conduct a thorough ablation study and establish rigorously that SSL pretraining makes ASR models vulnerable to transferred attacks. We also measure the influence of several other factors on transferability. This ablation study requires the generation of many sets of adversarial examples, using varying models as proxy, which would be computationally difficult with the improved attack introduced in Section 3.1. Since we do not seek optimal performance, throughout this section we run the base attack in Section 2.2 with 1000 optimization steps.
4.1 INFLUENCE OF SELF-SUPERVISED LEARNING
In this section, we compare Wav2Vec2 models with varying amounts of pretraining data: 60k hours, 960h, or none at all. We use each model both as a proxy to generate adversarial noise and as a private model for evaluation with all other proxies.
As Wav2Vec2 models fine-tuned from scratch are not publicly available, we train our own models with no pretraining, using the Wav2Vec2 fine-tuning configurations on 960h of labeled data available in Fairseq (Ott et al., 2019). These configurations are likely suboptimal and our models achieve test-clean WERs of 9.1% (Large) and 11.3% (Base), much higher than the pretrained+fine-tuned Wav2Vec2 models. This performance discrepancy could affect the fairness of our comparison. We therefore add to our experiments Wav2Vec2 Base models fine-tuned on 1h and 10h of labeled data only. These models achieve test-clean WERs of 24.5% and 11.1%. Therefore we can observe the influence of SSL pretraining by taking model architecture and performance out of the equation.
Our attacks are not as strong as in Section 3.1, and only have a limited effect on the targeted WER. Therefore we evaluate results at the character level, which offers much finer granularity. For reference, we observe that the CER between two random pairs of sentences in LibriSpeech is 80-85% on average. Therefore attack success rates higher than 20% (i.e. CER < 80% with the target) indicate a partially successful attack. We report those results in Table 2. Results in italic correspond to cases where the attacked model is the proxy or was fine-tuned from the same pretrained representation, and therefore do not correspond to a transferred attack.
These results show unambiguously that SSL pretraining plays a huge role in the transferability of adversarial attacks. Adversarial examples generated on the pretrained Wav2Vec2 models fine-tuned on 960h are partially successful on all pretrained models (success rate in the 25-46% range). They are however ineffective on the ASR models trained from scratch (4-8%). Similarly, models trained from scratch are bad proxies for pretrained models (2-3%) and even for each other (19-22%).
It follows that SSL pretraining is a necessary condition for transferable adversarial examples in both the proxy and the private model. We confirm it by plotting in Figure 3a the evolution of the target loss while generating one adversarial example. We display the loss for the proxy model (blue) and two private models. The loss of the pretrained private model (red) converges to a much lower value than the non-pretrained model (yellow).
SSL pretraining is however not a sufficient condition for attack transferability, and other factors play a role as well. For instance, the Base models fine-tuned on just 10h and 1h are ineffective proxies, so strong ASR models are likely better proxies than weaker ones.
4.2 INFLUENCE OF PRETRAINING DATA
As observed in Section 3, models that were (pre)trained on different data than the proxies can still be affected by transferred attacks. We analyse this effect in more detail in this section. We focus on five Wav2Vec2-Large models. One is pretrained and fine-tuned on LibriSpeech. One is pre-
trained on LibriLight and fine-tuned on LibriSpeech. Two are pretrained on LV60k, CommonVoice, SwitchBoard and Fisher, and fine-tuned respectively on LibriSpeech and SwitchBoard. Finally one is pretrained and finetuned on CommonVoice (English-only). As in the previous section, we evaluate every combination of proxy and target models.
We report the results in Table 3. We observe that most pairs of proxy and private models lead to important partial transferability. The major exception is the CommonVoice-only model, which does not succeed as a proxy for other models (0-8% success rate). In contrast, it is vulnerable to attacks transferred from other models, including those that do not have CommonVoice in their training data. We also note that models pretrained on Libri-Light or more (60k+ hours) are better proxies, and more vulnerable to attacks, than the LibriSpeech-only and CommonVoice-only models. In other words, the vulnerability that we point out is worsened rather than mitigated by increasing amounts of available data.
4.3 MODEL SIZE AND TRAINING HYPERPARAMETERS
We now extend our ablation study to models pretrained with different SSL paradigms. We report the results in Table 4. We observe that adversarial examples also transfer between models trained with different paradigms. Moreover, at equal pretraining data, not all models are equal proxies, and the HuBERT Large model (pretrained on 60kh) is the best proxy by a large margin.
5 A HYPOTHESIS FOR THE VULNERABILITY OF SSL-PRETRAINED MODELS
We have established a link between adversarial transferability and the SSL pretraining of ASR models. In this section we propose a hypothesis explaining that link. We first show in Section 5.1, with empirical justification, that attacks with a very precise target are much harder to transfer, everything
else being equal, explaining why targeted ASR attacks are usually nontransferable. Then in Section 5.2 we suggest ways in which SSL alleviates these difficulties, thus recovering some transferability.
5.1 AT EQUAL WHITE-BOX SUCCESS, VERY TARGETED ATTACKS ARE HARDER TO TRANSFER
Targeted attacks on CIFAR10 force the model to predict one out of 10 different labels. Targeted attacks on ASR models force the model to transcribe one of all the possible transcriptions: with sequences of just five English words, the number of possibilities is on the order of $170000^5 \sim 10^{26}$. We can call such an attack "very targeted", by contrast to more "mildly targeted" attacks on CIFAR10.
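To see where this figure comes from, take a vocabulary of roughly 170,000 English words: $\log_{10}\!\big(170000^5\big) = 5\,\log_{10}\!\big(1.7\times 10^{5}\big) \approx 5 \times 5.23 \approx 26$, hence the $10^{26}$ order of magnitude.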
We hypothesize that the target precision, or ”how targeted” the attack is, negatively affects its transferability success rate, explaining why targeted ASR attacks do not transfer easily. To demonstrate it empirically, we can imagine an experiment where an attacker tries to run a very targeted attack on CIFAR10. We hypothesize that in such a case, the transferred attack success rate would drop even if the white box attack success rate remains high. Inversely, if we designed a ”mildly targeted” attack on ASR models, we would expect it to achieve a non-trivial transferability success rate. We designed experiments for both cases, which we summarize below. Complete experimental details and results are provided in Appendix B.
5.1.1 VERY TARGETED ATTACKS ON CIFAR10
We run an attack on a ResNet CIFAR10 model. We do not just enforce the model's most probable output (top-1 prediction) but its first k most probable outputs (top-k prediction). For example with k = 3, given an image of an airplane, the attack objective could be to modify the image such that the most probable model output is "car", the second most probable is "bird" and the third is "frog". Our attack algorithm sets a "target distribution" of classes, then minimizes the KL divergence between the model's probabilistic outputs and that target, using Projected Gradient Descent. The success rate is evaluated by matching the top k predictions and the top k targets.
We compute the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k. For k = 1, we measure a transferability success rate above 30%. However, as k increases, the transferability success rate drops close to 10%, which is the success threshold that a random model would achieve. In other words, the transferability becomes null as k increases. Meanwhile, the white box attack success rate remains above 95%. Therefore very targeted attacks on images do not transfer.
5.1.2 MILDLY TARGETED ATTACKS ON ASR
We train five small Conformer models on LibriSpeech. On each of them we generate targeted adversarial examples. The target objective is simply to prepend the word "But" to the original transcription. This makes for a much less targeted attack than is traditionally done with ASR. The attack success rate is evaluated simply by checking the presence of the word "But" at the beginning of the prediction. We restrict evaluation to inputs whose transcription does not start with that word.
For each model, we generate 100 adversarial examples and evaluate them on all 4 other models. We thus obtain 20 different transferability success rates. The average of these scores is 18% with a standard deviation of 4.7%. Therefore mildly targeted attacks on ASR transfer substantially better than regular, very targeted attacks. Equivalent experiments with very targeted ASR attacks are reported in Abdullah et al. (2021b): the word-level transferability success rate is 0%.
5.2 VERY TARGETED TRANSFERABILITY REQUIRES IMPORTANT FEATURE OVERLAP
Why would very targeted attacks transfer less? As Ilyas et al. (2019) show, statistically meaningful patterns in the training data may be "robust" (i.e. resilient to small perturbations) or non-robust. By leveraging non-robust features attackers can generate adversarial perturbations - and as these features can be learned by any model, these perturbations will transfer. The underlying assumption behind this framework is that all models learn the same features. In practice, two separate models do not learn identical features due to randomness in training. But if they are "close enough", i.e. if the feature overlap between both models is important, then transferability will be observed.
It therefore makes perfect sense that more targeted attacks would transfer less. The more precise and difficult the attack objective is, the more features the attacker will depend on to achieve it. This increases the amount of feature overlap needed between proxy and private model for the attack to transfer. In the case of targeted ASR attacks, the required overlap is considerable. We hypothesize that SSL pretraining increases the feature overlap between ASR models. As empirically verifying it would pose important difficulties, we propose a high-level justification of that hypothesis.
ASR training aims at learning a representation that enables speech transcription. A subset of all features is sufficient to achieve this objective: for instance, there are lots of redundancies between low-frequency and high-frequency features, and a human listener can easily transcribe speech where most frequencies have been filtered out. The set of features learned by ASR models is therefore underspecified: two models, even trained very similarly, may learn representations with little overlap.
Self-Supervised Learning on the other hand does not only learn features useful for transcription but the features needed for predicting the input itself: parts of the input are masked, then they (or their quantized or clusterized form) are predicted using context. Arguably this much more ambitious objective requires the network to learn as many features as possible. In fact, the goal of such pretraining is to learn representations that are useful not just for ASR but for any downstream task - i.e. "exhaustive" representations. Intuitively, different models trained in that way would share many more features than ASR models trained from scratch - leading to more transferable adversarial examples.
6 RELATED WORK
The transferability of adversarial attacks has been known for many years in Image Classification (Papernot et al., 2016). On ASR it has been limited to simple attack objectives, like preventing WakeWord detection in Alexa (Li et al., 2019) or signal processing-based attacks (Abdullah et al., 2021a; 2022b). When it comes to optimization-based attacks on large ASR models, transferability claims are usually limited and focus on untargeted attacks (Wu et al., 2022). In very specific cases there have been limited claims of targeted, transferable attacks, such as Yuan et al. (2018); however, this work does not focus on imperceptible attacks with small amounts of noise, but rather attacks embedded in music. When it comes to standard targeted optimization attacks, Abdullah et al. (2021b) have shown that they display no transferability on DeepSpeech2 models, even when the proxy and the attacked model are trained with identical hyperparameters apart from the initial random seed.
Past ASR adversarial attacks usually focus on a handful of neural architectures, typically DeepSpeech2 (Amodei et al., 2016), sometimes Listen Attend and Spell (Chan et al., 2016). Only recently have attacks been extended to multiple recent architectures for a fair comparison between models (Lu et al., 2021; Olivier & Raj, 2022; Wu et al., 2022). Most related to this work is Wu et al. (2022), which focuses on the vulnerability of SSL speech models. They however focus on attacking the base pretrained model with untargeted noise that remains effective on downstream tasks. We study targeted attacks, with a much deeper focus on transferability between different models. Olivier & Raj (2022) have hinted that Wav2Vec2 models are vulnerable to transferred attacks, but only report limited results on two models and do not investigate the cause of that phenomenon. We attribute it to SSL pretraining and back our claims empirically.
Abdullah et al. (2022a) have identified factors that hinder transferability for ASR attacks, such as MFCC features, Recurrent Neural Networks, and large output sizes. Since Wav2Vec2 is a CNN-Transformer model with character outputs, it has a better prior than DeepSpeech2 for achieving transferable adversarial attacks. However, according to that paper, this should be far from sufficient to obtain transferable attacks: our results differ in the case of SSL-pretrained models.
7 CONCLUSION
We have shown that ASR targeted attacks are transferable between SSL-pretrained ASR models. Direct access to their weights is no longer required to fool models into predicting outputs of the attacker's choice - and to an extent, knowledge of their training data is not required either. With that in mind, and given the existence of over-the-air attack algorithms, we expect attacks against ASR models to become a practical, realistic threat as soon as Wav2Vec2-type models are deployed in production.
In that context, it is paramount to develop adversarial defense mechanisms for ASR models. Fortunately, such defenses already exist, but they come at the cost of a tradeoff in model performance. We illustrate it in appendix E. Further research should be carried out into mitigating that tradeoff and adapting to ASR the most effective defenses in image classification, such as adversarial training.
A EXPERIMENTAL DETAILS FOR LIBRISPEECH EXPERIMENTS
A.1 FRAMEWORKS
We compute adversarial examples using the robust speech framework (Olivier & Raj, 2022). This library uses Speechbrain (Ravanelli et al., 2021) to load and train ASR models and offers implementations of various adversarial attack algorithms. Models and attacks are implemented using PyTorch (Paszke et al., 2019).
We use robust speech for evaluation on SpeechBrain-supported models. In section 3 we export a HuggingFace Dataset (Lhoest et al., 2021), then evaluate models via the HuggingFace Transformers (Wolf et al., 2020) library. Finally, we use Fairseq (Ott et al., 2019) for training models from scratch.
All of our robust speech and Fairseq configurations are released alongside this article.
A.2 ATTACK HYPERPARAMETERS
We exploit the Carlini&Wagner attack (see section 2.2) implemented in robust speech, with the following hyperparameters:
• initial ϵ: 0.015 (and 0.04 in appendix D.1)
• learning rate: 0.0005
• number of decreasing ϵ values: 1
• Regularization constant c: 10
• optimizer: SGD
• attack iterations: 10000 in section 3.1, 1000 in section 4
A.3 DATASET AND TARGETS
Our adversarial dataset in section 3.1 consists of 85 sentences from the LibriSpeech test-clean set. To extract these sentences we take the first 200 sentences in the manifest, then keep only those shorter than 7 seconds. In section 4, we take the first 100 sentences and filter those shorter than 14 seconds.
As attack targets, we use actual LibriSpeech sentences sampled from the test-other set. Our candidate targets are:
• Let me see how can i begin
• Now go I can’t keep my eyes open
• So you are not a grave digger then
• He had hardly the strength to stammer
• What can this mean she said to herself
• Not years for she’s only five and twenty
• What does not a man undergo for the sake of a cure
• It is easy enough with the child you will carry her out
• Poor little man said the lady you miss your mother don’t you
• At last the little lieutenant could bear the anxiety no longer
• Take the meat of one large crab scraping out all of the fat from the shell
• Tis a strange change and I am very sorry for it but I’ll swear I know not how to help it
• The bourgeois did not care much about being buried in the Vaugirard it hinted at poverty pere Lachaise if you please
For each sentence we attack, we assign the candidate target whose length is closest to that of the sentence's original transcription.
A.4 MODELS
A.4.1 TRAINING WAV2VEC2 MODELS FROM SCRATCH
We use Fairseq to train Base and Large Wav2Vec2 models from scratch. Unfortunately, no configuration or pretrained weights have been released for that purpose, and we resort to using Wav2Vec2 fine-tuning configurations while simply skipping the pretraining step. Despite our attempts to tune training hyperparameters, we do not match the expected performance of a Wav2Vec2 model trained from scratch: (Baevski et al., 2020) report a WER of 3.0% for a large model, while we only get 9.1%.
A.4.2 GENERATING ADVERSARIAL EXAMPLES
Wav2Vec2, HuBERT and Data2Vec models are all supported directly in robust speech and are therefore those we use for generating adversarial examples. We use the HuggingFace backend of Speechbrain for most pretrained models, and its Fairseq backend for a few (Wav2Vec2-Base models finetuned on 10h and 1h, and models trained from scratch). In both cases, the model’s original tokenizer cannot be loaded in SpeechBrain directly. Therefore, we fine-tune the final projection layer of each model on 1h of LibriSpeech train-clean data.
The Wav2Vec2 model pretrained and fine-tuned on CommonVoice is a SpeechBrain original model. Similarly, we fine-tune it on 1h of LibriSpeech data as a shift from the CommonVoice output space to the LibriSpeech one. As a result, all our models share the same character output space.
A.4.3 EVALUATING PRETRAINED MODELS
In section 3, we directly evaluate models from HuggingFace Transformers and SpeechBrain on our adversarial dataset, without modification.
B EXPERIMENTAL DETAILS AND RESULTS FOR SMALL-SCALE EXPERIMENTS
This section describes the experimental details used in section 5.
B.1 CIFAR10 EXPERIMENTS
We use a pretrained ResNet18 as proxy, and a pretrained ResNet50 as private model.
Our ”very targeted attack” PGDk consists in applying the following steps for each input:
• Target selection: we sample uniformly an ordered subset of k classes out of 10 (e.g. with k = 3: (2, 5, 6)). We also sample a point uniformly on the unit k-simplex $\{x \in [0,1]^k : \sum_i x_i = 1\}$, by sampling from an exponential distribution and normalizing (Onn & Weissman, 2011) (e.g. (0.17, 0.55, 0.28)). We combine the two to obtain a 10-dimensional vector with zero probability on all but the selected k classes (y = (0, 0.17, 0, 0, 0.55, 0.28, 0, 0, 0, 0)). This is our target.
• During the attack, we use Projected Gradient Descent (Madry et al., 2018) to minimize the KL divergence KL(f(x), y) between the softmax output and the target, within L2 radius ϵ = 0.5. We use learning rate 0.1 for k ∗ 1000 attack steps.
• We measure attack success rate by measuring the top-k match between f(x) and y:
$$\mathrm{acc} = \frac{1}{k}\sum_{i=1}^{k} \mathbb{1}\big[\mathrm{argsort}(f(x))_i = \mathrm{argsort}(y)_i\big]$$
with argsort(y) returning the indices of the sorted elements of y in decreasing order. For instance f(x) = (0.1, 0.05, 0.05, 0.05, 0.35, 0.2, 0.05, 0.05, 0.05, 0.05) would get an accuracy of 0.666, as the top 2 classes match with y but not the third.
We evaluate attacks on 256 random images from the CIFAR10 dataset. For each value of k between 1 and 10 we repeat the experiment 3 times and average the attack success rates. In figure 2 we plot
the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k.
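A condensed sketch of the procedure above (tensor shapes and helper choices are ours) is given below for a single image x of shape (1, 3, 32, 32).

```python
import torch
import torch.nn.functional as F

def very_targeted_attack(model, x, k, eps=0.5, lr=0.1, steps_per_class=1000):
    """Sketch of the top-k CIFAR10 attack: sample an ordered set of k classes
    and a point on the k-simplex, then minimize the divergence to that target
    distribution with L2-bounded gradient descent."""
    model.eval()
    num_classes = 10
    classes = torch.randperm(num_classes)[:k]
    weights = torch.distributions.Exponential(1.0).sample((k,))
    weights = weights / weights.sum()          # uniform point on the simplex
    target = torch.zeros(num_classes)
    target[classes] = weights

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(k * steps_per_class):
        log_probs = F.log_softmax(model(x + delta), dim=-1).squeeze(0)
        # equals KL(target || softmax output) up to a constant in the target
        loss = -(target * log_probs).sum()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad
            norm = delta.norm()
            if norm > eps:                     # project onto the L2 ball
                delta *= eps / norm
            delta.grad.zero_()

    with torch.no_grad():
        probs = F.softmax(model(x + delta), dim=-1).squeeze(0)
    top_pred = probs.argsort(descending=True)[:k]
    top_tgt = target.argsort(descending=True)[:k]
    success = (top_pred == top_tgt).float().mean().item()
    return delta.detach(), success
```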
B.2 MILDLY TARGETED ASR ATTACKS
We train 5 identical conformer encoder models with 8 encoder layers, 4 attention heads, and hidden dimension 144. We train them with CTC loss for 30 epochs on the LibriSpeech train-clean-100 set, with different random seeds.
We run an L2-PGD attack with SNR bound 30dB, in which we minimize the cross-entropy loss between the utterance and its transcription prepended with the word "But". The utterances we attack are the first 100 sentences in the LibriSpeech test-clean set, from which we remove 7 sentences already starting with the word "But". We generate adversarial examples using each of the 5 models as proxy, and evaluate these examples on all 5 models. We report the full results in Table 5.
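For reference, the success check and the aggregation over the 20 proxy/target pairs can be sketched as follows (function names are placeholders).

```python
import itertools
import statistics

def is_success(prediction):
    # mildly targeted objective: the transcription must start with "But"
    return prediction.strip().upper().startswith("BUT ")

def transfer_success_rates(models, adv_examples, transcribe):
    """adv_examples[proxy] holds the adversarial utterances crafted on that
    proxy; each set is scored on every *other* model (5 x 4 = 20 pairs)."""
    rates = []
    for proxy, target_model in itertools.permutations(models, 2):
        preds = [transcribe(target_model, x) for x in adv_examples[proxy]]
        rates.append(sum(map(is_success, preds)) / len(preds))
    return statistics.mean(rates), statistics.stdev(rates)
```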
C FULL RESULTS TABLE FOR CROSS-MODEL ATTACKS
Table 6 completes the ablation study in Section 4 by evaluating all pairwise Proxy-Model combinations in our pool of Wav2Vec2-type models.
D INFLUENCE OF HYPERPARAMETERS ON ATTACK RESULTS
D.1 ATTACK RADIUS
In Table 7 we extend the results of Table 1 by comparing attack results for two different attack radii. These radii are ϵ = 0.015 and ϵ = 0.04, corresponding respectively to Signal-Noise Ratios of 30dB and 22dB. The former is identical to Table 1; the latter is substantially larger, and corresponds to a more easily perceptible noise.
Looking at the white-box attack results on the proxy models, the difference is drastic: with larger noise the targeted success rate jumps from 88% to 98%. The transferred attack results on SSL-pretrained models also increase overall, with success increases ranging from 0% (Wav2Vec2-Large) to 20% (Data2Vec-Large) and a median increase of 10%. Crucially however, the targeted success does not increase at all, and even decreases, for ASR models trained from scratch. This confirms that there is a structural difference between the robustness of ASR models with and without SSL, one that cannot be bridged simply by increasing the attack strength.
D.2 LANGUAGE MODELS
In section 3 we report the results of our adversarial dataset on multiple Wav2Vec2-type models, enhanced with an N-gram language model whenever available. In Table 8 we evaluate the influence of that language model on attack results.
We observe that the attack success rate systematically increases by 8 to 17% when adding a language model to the ASR model. This is understandable considering that our targets are sound English sentences: if a model tends to transcribe that target with mistakes, the language model can bridge that
gap. To put it differently, the more prone an ASR model is to output sentences in a given distribution, the more vulnerable it is to attacks with targets sampled from that distribution. Language models are therefore more of a liability than a defense against attacks, and most likely so would be many tricks applied to an ASR model in order to improve its general performance.
D.3 EFFECT OF MODEL REGULARIZATION ON TRANSFERABILITY
As mentioned in Section 2.2 we use regularization tricks like dropout in all proxy models when optimizing the adversarial perturbation. In Figure 3b we plot the loss on proxy and private models without that regularization, for comparison with Figure 3a. We observe that the loss degrades significantly on private models without regularization.
On the other hand, the loss on the proxy converges much faster in Figure 3b: removing model regularization makes for better, faster white-box attacks, at the cost of all transferability. To the best of our knowledge, past works like Carlini & Wagner (2018) have not used regularization for generation, which explains why they report better white-box attacks than we do in terms of WER and SNR. However, as we have established above, applying regularization when attacking standard ASR models does not lead to transferable adversarial examples: for that, SSL pretraining is also required.
E DEFENDING AGAINST ADVERSARIAL EXAMPLES
Although we have shown that adversarial attacks can represent an important threat for private, SSL-based ASR models, it is possible to defend against them. Randomized smoothing (Cohen et al., 2019) is a popular adversarial defense that has been applied to ASR in the past (Olivier & Raj, 2021) and comes with some robustness guarantees. It consists in applying to the inputs, before feeding them to the model, amounts of random Gaussian noise that are significantly larger than potential adversarial perturbations in L2 norm. For reference we try applying it to some of our models.
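A minimal sketch of the inference-time procedure is given below; the `decode` and `vote` helpers stand in for the model's decoding routine and the ROVER-style voting step.

```python
import torch

def smoothed_transcribe(model, decode, vote, x, sigma=0.02, n_samples=8):
    """Randomized smoothing for ASR, sketched: transcribe several noisy copies
    of the input and combine the hypotheses with a voting scheme."""
    hypotheses = []
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            hypotheses.append(decode(model(noisy)))
    return vote(hypotheses)
```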
We follow Olivier & Raj (2021) and enhance randomized smoothing with a-priori SNR estimation and ROVER voting (with 8 outputs) to boost performance. We use Gaussian deviation σ = 0.02. For evaluation, we simply check the effect of our adversarial examples generated in Section 3.1 on the smoothed model. A rigorous evaluation would require us to design adaptive attacks (Athalye et al., 2018; Tramer et al., 2020); since this paper does not focus on claiming robustness to attacks, we restrict ourselves to a simpler setting.
We report our results in Table 9 for the Wav2Vec2-Base, Wav2Vec2-Large and Data2Vec-Large models, pretrained and fine-tuned on 960h of LibriSpeech training data. We observe that randomized smoothing is sufficient to block the targeted attack completely (0% success rate) and recover most of the original transcription (the untargeted success rate drops to 14-34% depending on the model). However, due to the addition of gaussian noise on all inputs the defense takes a toll on the performance on clean data: the WER jumps by 4-10%. The standard deviation σ controls this tradeoff between robustness and performance; we chose the value of σ that minimizes the untargeted success rate.
Unsurprisingly, randomized smoothing is a promising protection against transferred attacks, but it does leave room for improvement. These results illustrate the need for additional research on adversarial defenses.
Summary Of The Paper
This paper studies transferable targeted adversarial attacks on self-supervised ASR models. A targeted attack adds a small perturbation to an input such that the model makes the targeted prediction desired by the attacker instead of the correct one corresponding to the original input. Transferability refers to generalizing the attack to private models which have not been used to generate the adversarial perturbation.
Past studies show that targeted attacks are hard to generalize for supervised ASR models. However, in contrast to previous findings, the authors demonstrate that such attacks can in fact generalize to ASR models pre-trained with self-supervised learning on similar datasets. The authors then present a series of studies to understand what leads to successful attack transfer, including the self-supervised objective, model size, training data size, and specificity of the attack.
Strengths And Weaknesses
Strengths
The authors presented an interesting study with new observations contrary to previous studies of adversarial attacks on ASR models. This could open up a new direction of empirical and theoretical study to understand robustness in self-supervised setups.
Many factors are studied to reason and hypothesize why self-supervised ASR models are more vulnerable to targeted adversarial attack, providing valuable data points for future work.
Attacks of different levels of specificity have also been considered to support the hypothesis.
A more generalizable scheme for optimizing the adversarial attack is proposed in this paper, which uses multiple proxy models and validates/selects checkpoints with a different model.
Weaknesses
While the authors have compared several different SSL models (wav2vec, hubert, data2vec, WavLM, UniSpeech), they all have very similar architectures (convolution encoder for waveform followed by transformer layers). The argument that "attacks on SSL models are transferable when they are pre-trained on the same dataset" would be more convincing if the authors had considered another SSL model with a different architecture (e.g., LSTM).
The experiments studying the effect of pre-training data could have been expanded. The authors showed in Table 1 that targeted attack performance is worse on W2V2-Large (CV), which is an interesting observation. The study would be more complete if the authors included a) optimizing the attack on CV and transferring to LV; b) optimizing the attack on a model pre-trained on more datasets (a CV+LV+Fisher checkpoint is available on Github) and transferring to one pre-trained on a subset of it; c) the reverse of b.
Clarity, Quality, Novelty And Reproducibility
The topic studied in this paper is novel. The paper is easy to follow and well stated. Experimental details are provided to facilitate reproduction. |
Our results show that SSL, a line of work gathering attention in the ASR community that has pushed the state-of-the-art on many benchmarks, is also a source of vulnerability. Formerly innocuous attacks with unreasonable assumptions are now effective against many modern models. As it is likely that SSL will be used to train ASR systems in production, our results pave the way for practical, targeted attacks in the real world. By no means do these results imply that this line of work should be aborted, but they emphasize the pressing need to focus on robustness alongside performance.
2 BACKGROUND
2.1 SSL PRETRAINING FOR ASR MODELS
We describe in this Section the principles of SSL-pretrained ASR models, whose robustness to attacks we evaluate in this work. These models usually follow the neural architecture of Wav2Vec2 (Baevski et al., 2020). Raw audio inputs are fed directly to a CNN. A Transformer encodes the CNN outputs into contextualized representations. A final feed-forward network projects these representations in a character output space. The model is fine-tuned with CTC loss (Graves et al., 2006).
A number of different models follow this architecture, including Wav2Vec2, HuBERT (Hsu et al., 2021), Data2Vec (Baevski et al., 2022), UniSpeech-SAT (Wang et al., 2021; Chen et al., 2021b) or WavLM (Chen et al., 2021a). These networks only have very minor differences in their architectures, to the point that standardized sizes are used for all of them. Base models have 12 transformer hidden layers and 90M parameters. Large models have 24 layers and 300M parameters. Finally, XLarge models have 48 layers for a total of 1B parameters.
While the networks are similar, the training pipelines of these models differ substantially. All models are pretrained on large amounts of unlabeled data, then fine-tuned for ASR on varying quantities of labeled data. The pretraining involves SSL objectives, such as Quantization and Contrastive Learning (Wav2Vec2), offline clustering and masked predictions (HuBERT), or masked prediction
of contextualized labels (Data2Vec). Unispeech combines SSL and CTC pretraining with multitask learning. WavLM adds denoising objectives and scales to even greater amounts of unlabeled data.
SSL pretraining is helpful in many regards: it makes the same network easy to fine-tune for multiple downstream tasks with little labeled data and has improved state-of-the-art results in ASR benchmarks, especially in low-resource settings. As we demonstrate, it is also a source of vulnerabilities.
2.2 ADVERSARIAL ATTACKS
Adversarial examples are inputs modified imperceptibly by an attacker to fool machine learning models (Szegedy et al., 2014; Goodfellow et al., 2014; Carlini & Wagner, 2016; Madry et al., 2018). While most works have focused on image classification, several created of adapted attacks for other tasks such as ASR (Cisse et al., 2017; Carlini & Wagner, 2018; Qin et al., 2019).
The attack we use is based on the Carlini&Wagner ASR attack (Carlini & Wagner, 2018), although slightly simplified. Given an input x, a target transcription yt, and an ASR model f trained with loss L, our attack finds an additive perturbation δ optimizing the following objective:
min δ
L(f(x+ δ), yt) + c ∥δ∥22 s.t. ∥δ∥∞ < ϵ (1)
which we optimize using L∞ Projected Gradient Descent. While the CW attack typically uses a large initial ϵ, then gradually reduces it as it finds successful perturbations, we fix a single value of ϵ and optimize for a fixed number of iterations. We find that this scheme, closer to the PGD algorithm Madry et al. (2018), greatly improves attack transferability. However we keep using the L2 regularization term c ∥δ∥22 introduced in the CW attack. We also find that applying regularization such as dropout during attack optimization greatly helps to generate transferable perturbations. This effect is analyzed more in detail in Appendix D.3. Throughout the rest of the paper, we run all attack optimization steps using the default dropout, layer drop, etc. that the proxy model used during training (typically a dropout of 0.1).
3 TRANSFERABLE ATTACK ON STATE-OF-THE-ART ASR MODELS
In our core experiment, we fool multiple state-of-the-art SSL-pretrained ASR models with targeted and transferred adversarial attacks. We generate a small set of targeted audio adversarial examples using fixed proxy models. We then transfer those same examples on a large number of models available in the HuggingFace Transformers library. Table 1 specifies how much unlabeled and labeled data these models were trained on. We provide the full experimental details in appendix A.
3.1 GENERATING ADVERSARIAL EXAMPLES ON PROXIES
We describe our procedure to generate adversarial examples. To maximize the transferability success rate of our perturbations we improve the base attack in Section 2.2 in several key ways:
• To limit attack overfitting on our proxy, we combine the losses of two proxy models: Wav2Vec2 and HuBERT (LARGE). Both models were pretrained on the entire LV60k dataset and finetuned on 960h of LibriSpeech. As these models have respectively a contrastive and predictive objective, they are a representative sample of SSL-pretrained ASR models. The sum of their losses is used as the optimization objective in Equation 1.
• We use 10000 optimization steps, which is considerable (for comparison Carlini & Wagner (2018) use 4000) and can also lead to the adversarial noise overfitting the proxy models. To mitigate this effect we use a third model, the Data2Vec BASE network trained on LibriSpeech, as a stopping criterion for the attack. At each attack iteration, we feed our adversarial example to Data2Vec, and keep track of the best-performing perturbation (in terms of WER). We return that best perturbation at the end of the attack. Because this procedure is computationally expensive, we only apply it to a subset A of 85 utterances of less than 7 seconds. We sample them randomly in the LibriSpeech testclean set. We select attack targets at random: we sample a completely disjoint subset B of
utterances in the LibriSpeech test-other set. To each utterance in A we assign as target the transcription of the sentence in B whose length is closest to its own. This ensures that a very long target isn’t assigned to a very short utterance or vice versa.
3.2 TRANSFERRING ADVERSARIAL EXAMPLES ON ASR MODELS
We evaluate all SSL-pretrained models mentioned in Section 2.1, along with several others for comparison: the massively multilingual speech recognizer or M-CTC (Lugosch et al., 2022) trained with pseudo-labeling, and models trained from scratch for ASR: the Speech-to-text model from Fairseq (Wang et al., 2020) and the CRDNN and Transformer from SpeechBrain (Ravanelli et al., 2021).
3.3 METRICS
We evaluate the performance of ASR models with the Word-Error-Rate (WER) between the model output and the correct outputs.
When evaluating the success of adversarial examples, we can also use the Word-Error-Rate. Between the prediction and the attack target yt, a low WER indicates a successful attack. We therefore define the word-level targeted attack success rate as
TASR = max(1−WER(f(x+ δ), yt), 0) (2)
It is also interesting to look at the results of the attack in terms of denial-of-service, i.e. the attack’s ability to stop the model from predicting the correct transcription y. Here a high WER indicates a successful attack. We define the word-level untargeted attack success rate as
UASR = min(WER(f(x+ δ), y), 1) (3)
We can also compute the attack success rate at the character level, i.e. using the Character-ErrorRate (CER) instead of the Word-Error-Rate. Character-level metrics are interesting when using weaker attacks that affect the model, but not enough to reduce the targeted WER significantly. We use them in our ablation study in section 4.
Finally, we control the amount of noise in our adversarial examples with the Signal-Noise Ratio (SNR), defined as
SNR(δ, x) = 10 log( ∥x∥22 ∥δ∥22 ) (4)
for an input x and a perturbation δ. When generating adversarial examples we adjust the L∞ bound ϵ (equation 1 to achieve a target SNR.
3.4 RESULTS
We report the results of our adversarial examples in Table 1 for ϵ = 0.015, corresponding to a Signal-Noise Ratio of 30dB on average. In Appendix D.1 we also report results for a larger ϵ value.
On 12 out of 16 models, we observe that the attack achieves total denial-of-service: the untargeted success rate is 100%. Moreover, on the first 6 models (proxies aside), the targeted attack success rate ranges between 50% and 81%: the target is more than half correctly predicted! These results are in flagrant contradiction with past works on DeepSpeech2-like models, where even the slightest change in training leads to a total absence of targeted transferability between proxy and private model. Our private models vary from the proxies in depth, number of parameters and even training methods, yet we observe important transferability. However, these 6 models have all been pretrained on LibriSpeech or Libri-Light with SSL pretraining, i.e. the same data distribution as our proxies.
The following five models were pretrained on different datasets. One was pretrained on a combination of Libri-Light, VoxPopuli and GigaSpeech; two on Libri-Light, CommonVoice, SwitchBoard and Fisher; and two on CommonVoice either multilingual or English-only. The transferability success rate on these five models ranges from 18% to 67%, which is significant. Even the CommonVoice models, whose training data has no intersection with Libri-Light, are partially affected.
Although our inputs and attack targets are in English, we apply them to a French-only CommonVoice Wav2vec2. This model, incapable of decoding clean LibriSpeech data, is also unaffected by our targeted perturbation. It therefore seems that, while multilingual models are not robust to our examples, a minimal performance on the original language is required to observe transferability.
The final 4 models for which the targeted transferability rate is null or close to null, are those that were not SSL-pretrained at all (including M-CTC which was pretrained with pseudo-labeling). These four models also partially resist the untargeted attack.
It emerges from these results that some recent ASR models, specifically those pretrained with SSL, can be vulnerable to transferred attacks. These results diverge significantly from previous works like (Abdullah et al., 2021b; 2022a) which showed no transferability between different models. Table 1 hints that SSL pretraining plays an important role in transferability, but does not prove it: to do so we would need to compare models of identical architecture and performance, pretrained and trained from scratch, both as proxy and target. This is what we do in the next section.s
4 IDENTIFYING THE FACTORS THAT ENABLE ATTACK TRANSFERABILITY
In this section, we conduct a thorough ablation study and establish rigorously that SSL pretraining makes ASR models vulnerable to transferred attacks. We also measure the influence of several other factors on transferability. This ablation study requires the generation of many sets of adversarial examples,, using varying models as proxy, which would be computationally difficult with the improved attack introduced in section 3.1. Since we do not seek optimal performance, throughout this section we run the base attack in Section 2.2 with 1000 optimization steps.
4.1 INFLUENCE OF SELF-SUPERVISED LEARNING
In this section, we compare Wav2Vec2 models with varying amounts of pretraining data: 60k hours, 960h, or none at all. We use each model both as a proxy to generate adversarial noise and as a private model for evaluation with all other proxies.
As Wav2Vec2 models fine-tuned from scratch are not publicly available, we train our own models with no pretraining, using the Wav2Vec2 fine-tuning configurations on 960h of labeled data available in Fairseq (Ott et al., 2019). These configurations are likely suboptimal and our models achieve test-clean WERs of 9.1% (Large) and 11.3% (Base), much higher than the pretrained+fine-tuned Wav2Vec2 models. This performance discrepancy could affect the fairness of our comparison. We therefore add to our experiments Wav2Vec2 Base models fine-tuned on 1h and 10h of labeled data only. These models achieve test-clean WERs of 24.5% and 11.1%. Therefore we can observe the influence of SSL pretraining by taking model architecture and performance out of the equation.
Our attacks are not as strong as in section 3.1, and only have limited effect on the targeted WER. Therefore we evaluate results at the character level, which offers much finer granularity. For reference, we observe that the CER between two random pairs of sentences in LibriSpeech is 80-85% on average. Therefore attack success rates higher than 20% (i.e CER < 80% with the target) indicate a partially successful attack. We report those results in Table 2. Results in italic correspond to cases where attacked model is the proxy or was fine-tuned from the same pretrained representation, and therefore do not correspond to a transferred attack.
These results show unambiguously that SSL pretraining plays a huge role in the transferability of adversarial attacks. Adversarial examples generated on the pretrained Wav2Vec2 models fine-tuned on 960h are partially successful on all pretrained models (success rate in the 25-46% range). They are however ineffective on the ASR models trained from scratch (4-8%). Similarly, models trained from scratch are bad proxies for pretrained models (2-3%) and even for each other (19-22%).
It follows that SSL pretraining is a necessary condition for transferable adversarial examples in both the proxy and the private model. We confirm it by plotting in Figure 3a the evolution of the target loss while generating one adversarial example. We display the loss for the proxy model (blue) and two private models. The loss of the pretrained private model (red) converges to a much lower value than the non-pretrained model (yellow).
SSL pretraining is however not a sufficient condition for attack transferability, and other factors play a role as well. For instance, the Base model fine-tuned on just 10h and 1h are ineffective proxies: so strong ASR models are likely better proxies than weaker ones.
4.2 INFLUENCE OF PRETRAINING DATA
As observed in Section 3, models that were (pre)trained on different data than the proxies can still be affected by transferred attacks. We analyse this effect in more detail in this section. We focus on five Wav2Vec2-Large models. One is pretrained and fine-tuned on LibriSpeech. One is pretrained on LibriLight and fine-tuned on LibriSpeech. Two are pretrained on LV60k, CommonVoice, SwitchBoard and Fisher, and fine-tuned respectively on LibriSpeech and SwitchBoard. Finally, one is pretrained and fine-tuned on CommonVoice (English-only). As in the previous section, we evaluate every combination of proxy and target models.
We report the results in Table 3. We observe that most pairs of proxy and private models lead to important partial transferability. The major exception is the CommonVoice-only model, which does not succeed as a proxy for other models (0-8% success rate). In contrast, it is vulnerable to attacks transferred from other models, including those that do not have CommonVoice in their training data. We also note that models pretrained on LibriLight or more (60k+ hours) are better proxies, and more vulnerable to attacks, than the LibriSpeech-only and CommonVoice-only models. In other words, the vulnerability that we point out is worsened rather than mitigated by increasing amounts of available data.
4.3 MODEL SIZE AND TRAINING HYPERPARAMETERS
We now extend our ablation study to models pretrained with different SSL paradigms. We report the results in Table 4. We observe that adversarial examples also transfer between models trained with different paradigms. Moreover, at equal pretraining data, not all models are equally good proxies: the HuBERT Large model (pretrained on 60kh) is the best proxy by a large margin.
5 A HYPOTHESIS FOR THE VULNERABILITY OF SSL-PRETRAINED MODELS
We have established a link between adversarial transferability and the SSL pretraining of ASR models. In this section we propose a hypothesis explaining that link. We first show in Section 5.1, with empirical justification, that attacks with a very precise target are much harder to transfer, everything else being equal, explaining why targeted ASR attacks are usually nontransferable. Then in Section 5.2 we suggest ways in which SSL alleviates these difficulties, thus recovering some transferability.
5.1 AT EQUAL WHITE-BOX SUCCESS, VERY TARGETED ATTACKS ARE HARDER TO TRANSFER
Targeted attacks on CIFAR10 force the model to predict one out of 10 different labels. Targeted attacks on ASR models force the model to transcribe one of all the possible transcriptions: with sequences of just five English words the number of possibilities is roughly $170000^5 \sim 10^{26}$. We can call such an attack "very targeted", by contrast to more "mildly targeted" attacks on CIFAR10.
We hypothesize that the target precision, or ”how targeted” the attack is, negatively affects its transferability success rate, explaining why targeted ASR attacks do not transfer easily. To demonstrate it empirically, we can imagine an experiment where an attacker tries to run a very targeted attack on CIFAR10. We hypothesize that in such a case, the transferred attack success rate would drop even if the white box attack success rate remains high. Inversely, if we designed a ”mildly targeted” attack on ASR models, we would expect it to achieve a non-trivial transferability success rate. We designed experiments for both cases, which we summarize below. Complete experimental details and results are provided in Appendix B.
5.1.1 VERY TARGETED ATTACKS ON CIFAR10
We run an attack on a ResNet CIFAR10 model. We do not just enforce the model's most probable output (top1 prediction) but the first k most probable outputs (topk prediction). For example with k = 3, given an image of an airplane, the attack objective could be to modify the image such that the most probable model output is "car", the second most probable is "bird" and the third is "frog". Our attack algorithm sets a "target distribution" of classes, then minimizes the KL divergence between the model's probabilistic outputs and the target, using Projected Gradient Descent. The success rate is evaluated by matching the top k predictions and the top k targets.
We compute the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the ”target precision” k. For k = 1, we measure a transferability success rate above 30%. However, as k increases, the transferability success rate drops close to 10%, which is the success threshold that a random model would achieve. In other words, the transferability becomes null as k increases. Meanwhile, the white box attack success rate remains above 95%. Therefore very targeted attacks on images do not transfer.
5.1.2 MILDLY TARGETED ATTACKS ON ASR
We train five small Conformer models on LibriSpeech. On each of them we generate targeted adversarial examples. The target objective is simply to prepend the word "But" to the original transcription. This makes for a much less targeted attack than is traditionally run on ASR models. The attack success rate is evaluated simply by checking the presence of the word "But" at the beginning of the prediction. We restrict evaluation to inputs whose transcription does not start with that word.
For each model, we generate 100 adversarial examples and evaluate them on all 4 other models. We thus obtain 20 different transferability success rates. The average of these scores is 18% with a standard deviation of 4.7%. Therefore mildly targeted attacks on ASR transfer substantially better than regular, very targeted attacks. Equivalent experiments with very targeted ASR attacks are reported in Abdullah et al. (2021b): the word-level transferability success rate is 0%.
5.2 VERY TARGETED TRANSFERABILITY REQUIRES IMPORTANT FEATURE OVERLAP
Why would very targeted attacks transfer less? As Ilyas et al. (2019) show, statistically meaningful patterns in the training data may be "robust" (i.e. resilient to small perturbations) or non-robust. By leveraging non-robust features attackers can generate adversarial perturbations - and as these features can be learned by any model, these perturbations will transfer. The underlying assumption behind this framework is that all models learn the same features. In practice, two separate models do not learn identical features due to randomness in training. But if they are "close enough", i.e. if the feature overlap between both models is important, then transferability will be observed.
It therefore makes perfect sense that more targeted attacks would transfer less. The more precise and difficult the attack objective is, the more features the attacker will depend on to achieve it. This increases the amount of feature overlap needed between proxy and private model for the attack to transfer. In the case of targeted ASR attacks, the required overlap is considerable. We hypothesize that SSL pretraining increases the feature overlap between ASR models. As empirically verifying it would pose important difficulties, we propose a high-level justification of that hypothesis.
ASR training aims at learning a representation that enables speech transcription. A subset of all features is sufficient to achieve this objective: for instance, there are lots of redundancies between low-frequency and high-frequency features, and a human listener can easily transcribe speech where most frequencies have been filtered out. The set of features learned by ASR models is therefore underspecified: two models, even trained identically, may learn representations with little overlap.
Self-Supervised Learning on the other hand does not only learn features useful for transcription but features needed for predicting the input itself: parts of the input are masked, then they (or their quantized or clustered forms) are predicted using context. Arguably this much more ambitious objective requires the network to learn as many features as possible. In fact, the goal of such pretraining is to learn representations useful not just for ASR but for any downstream task - i.e. "exhaustive" representations. Intuitively, different models trained in that way would share many more features than ASR models trained from scratch - leading to more transferable adversarial examples.
6 RELATED WORK
The transferability of adversarial attacks has been known for many years in Image Classification (Papernot et al., 2016). On ASR it has been limited to simple attack objectives, like preventing WakeWord detection in Alexa (Li et al., 2019) or signal processing-based attacks (Abdullah et al., 2021a; 2022b). When it comes to optimization-based attacks on large ASR models, transferability claims are usually limited and focus on untargeted attacks (Wu et al., 2022). In very specific cases there have been limited claims of targeted, transferable attacks, such as Yuan et al. (2018); however, this work does not focus on imperceptible attacks with small amounts of noise, but rather attacks embedded in music. When it comes to standard targeted optimization attacks, Abdullah et al. (2021b) have shown that they display no transferability on DeepSpeech2 models, even when the proxy and the attacked model are trained with identical hyperparameters apart from the initial random seed.
Past ASR adversarial attacks usually focus on a handful of neural architectures, typically DeepSpeech2 (Amodei et al., 2016), sometimes Listen, Attend and Spell (Chan et al., 2016). Only recently have attacks been extended to multiple recent architectures for a fair comparison between models (Lu et al., 2021; Olivier & Raj, 2022; Wu et al., 2022). Most related to this work is Wu et al. (2022), which focuses on the vulnerability of SSL speech models. They however focus on attacking the base pretrained model with untargeted noise that remains effective on downstream tasks. We study targeted attacks, with a much deeper focus on transferability between different models. Olivier & Raj (2022) have hinted that Wav2Vec2 models are vulnerable to transferred attacks, but only report limited results on two models and do not investigate the cause of that phenomenon. We attribute it to SSL pretraining and back our claims empirically.
Abdullah et al. (2022a) have identified factors that hinder transferability for ASR attacks, such as MFCC features, Recurrent Neural Networks, and large output sizes. Since Wav2Vec2 is a CNN-Transformer model with character outputs, it has a better prior than DeepSpeech2 for achieving transferable adversarial attacks. However, according to that paper, this should be far from sufficient to obtain transferable attacks: our results differ in the case of SSL-pretrained models.
7 CONCLUSION
We have shown that targeted ASR attacks are transferable between SSL-pretrained ASR models. Direct access to their weights is no longer required to fool models into predicting outputs of the attacker's choice - and to an extent, knowledge of their training data is not required either. With that in mind, and given the existence of over-the-air attack algorithms, we expect attacks against ASR models to become a practical, realistic threat as soon as Wav2Vec2-type models are deployed in production.
In that context, it is paramount to develop adversarial defense mechanisms for ASR models. Fortunately, such defenses already exist, but they come at the cost of a tradeoff in model performance. We illustrate this in Appendix E. Further research should be carried out into mitigating that tradeoff and adapting to ASR the most effective defenses from image classification, such as adversarial training.
A EXPERIMENTAL DETAILS FOR LIBRISPEECH EXPERIMENTS
A.1 FRAMEWORKS
We compute adversarial examples using the robust speech framework (Olivier & Raj, 2022). This library uses Speechbrain (Ravanelli et al., 2021) to load and train ASR models and offers implementations of various adversarial attack algorithms. Models and attacks are implemented using PyTorch (Paszke et al., 2019).
We use robust speech for evaluation on SpeechBrain-supported models. In Section 3 we export a HuggingFace Dataset (Lhoest et al., 2021), then evaluate models via the HuggingFace Transformers (Wolf et al., 2020) library. Finally, we use Fairseq (Ott et al., 2019) for training models from scratch.
All of our robust speech and Fairseq configurations are released alongside this article.
A.2 ATTACK HYPERPARAMETERS
We use the Carlini & Wagner attack (see Section 2.2) implemented in robust speech, with the following hyperparameters (a minimal sketch of the corresponding optimization loop is given after the list):
• initial ϵ: 0.015 (and 0.04 in appendix D.1)
• learning rate: 0.0005
• number of decreasing ϵ values: 1
• Regularization constant c: 10
• optimizer: SGD
• attack iterations: 10000 in section 3.1, 1000 in section 4
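For illustration, here is a rough sketch of such an optimization loop in PyTorch. It is our simplification, not the robust speech implementation: `model` is a placeholder assumed to return per-frame log-probabilities of shape (T, 1, vocab) ready for CTC loss, and the ϵ-decrease schedule of the full Carlini & Wagner attack is omitted since only one ϵ value is used here.

import torch
import torch.nn.functional as F

def targeted_ctc_attack(model, x, target_ids, eps=0.015, lr=0.0005, c=10.0, iters=1000):
    # Sketch of an L-infinity-bounded targeted attack: minimize the CTC loss
    # towards the target transcription plus a perturbation penalty weighted by c.
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.SGD([delta], lr=lr)
    target_len = torch.tensor([target_ids.size(-1)])     # target_ids: (1, S) int tensor
    for _ in range(iters):
        log_probs = model(x + delta)                      # (T, 1, vocab) log-softmax outputs
        input_len = torch.tensor([log_probs.size(0)])
        loss = F.ctc_loss(log_probs, target_ids, input_len, target_len) \
               + c * delta.pow(2).mean()                  # regularization constant c
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                       # keep the noise within the radius
    return (x + delta).detach()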
A.3 DATASET AND TARGETS
Our adversarial dataset in section 3.1 consists of 85 sentences from the LibriSpeech test-clean set. To extract these sentences we take the first 200 sentences in the manifest, then keep only those shorter than 7 seconds. In section 4, we take the first 100 sentences and filter those shorter than 14 seconds.
As attack targets, we use actual LibriSpeech sentences sampled from the test-other set. Our candidate targets are:
• Let me see how can i begin
• Now go I can’t keep my eyes open
• So you are not a grave digger then
• He had hardly the strength to stammer
• What can this mean she said to herself
• Not years for she’s only five and twenty
• What does not a man undergo for the sake of a cure
• It is easy enough with the child you will carry her out
• Poor little man said the lady you miss your mother don’t you
• At last the little lieutenant could bear the anxiety no longer
• Take the meat of one large crab scraping out all of the fat from the shell
• Tis a strange change and I am very sorry for it but I’ll swear I know not how to help it
• The bourgeois did not care much about being buried in the Vaugirard it hinted at poverty pere Lachaise if you please
For each sentence we attack, we assign the candidate target with the closest length to the sentence’s original target.
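As a small illustration, this assignment amounts to the following (variable names are ours):

def pick_target(original_transcription: str, candidates: list[str]) -> str:
    # Assign the candidate target whose length is closest to the original transcription.
    return min(candidates, key=lambda t: abs(len(t) - len(original_transcription)))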
A.4 MODELS
A.4.1 TRAINING WAV2VEC2 MODELS FROM SCRATCH
We use Fairseq to train Base and Large Wav2Vec2 models from scratch. Unfortunately, no configuration or pretrained weights have been released for that purpose, and we resort to using Wav2Vec2 fine-tuning configurations while simply skipping the pretraining step. Despite our attempts to tune training hyperparameters, we do not match the expected performance of a Wav2Vec2 model trained from scratch: (Baevski et al., 2020) report a WER of 3.0% for a large model, while we only get 9.1%.
A.4.2 GENERATING ADVERSARIAL EXAMPLES
Wav2Vec2, HuBERT and Data2Vec models are all supported directly in robust speech and are therefore those we use for generating adversarial examples. We use the HuggingFace backend of Speechbrain for most pretrained models, and its Fairseq backend for a few (Wav2Vec2-Base models finetuned on 10h and 1h, and models trained from scratch). In both cases, the model’s original tokenizer cannot be loaded in SpeechBrain directly. Therefore, we fine-tune the final projection layer of each model on 1h of LibriSpeech train-clean data.
The Wav2Vec2 model pretrained and fine-tuned on CommonVoice is a SpeechBrain original model. Similarly, we fine-tune it on 1h of LibriSpeech data as a shift from the CommonVoice output space to the LibriSpeech one. As a result, all our models share the same character output space.
A.4.3 EVALUATING PRETRAINED MODELS
In section 3, we directly evaluate models from HuggingFace Transformers and SpeechBrain on our adversarial dataset, without modification.
B EXPERIMENTAL DETAILS AND RESULTS FOR SMALL-SCALE EXPERIMENTS
This section describes the experimental details used in section 5.
B.1 CIFAR10 EXPERIMENTS
We use a pretrained ResNet18 as proxy, and a pretrained ResNet50 as private model.
Our ”very targeted attack” PGDk consists in applying the following steps for each input:
• target selection. We sample uniformly an ordered subset of k classes out of 10 (e.g. with k = 3: (2, 5, 6)). We also sample a point uniformly on the unit k-simplex $\{(x_1, \dots, x_k) \in [0, 1]^k \mid \sum_i x_i = 1\}$, by sampling from an exponential distribution and normalizing (Onn & Weissman, 2011) (e.g. (0.17, 0.55, 0.28)). We combine the two to obtain a 10-dimensional vector with zero probability on all but the selected k classes (y = (0, 0.17, 0, 0, 0.55, 0.28, 0, 0, 0, 0)). This is our target.
• During the attack, we use Projected Gradient Descent (Madry et al., 2018) to minimize the KL divergence KL(f(x), y) between the softmax output and the target, within an L2 radius ϵ = 0.5. We use learning rate 0.1 for k × 1000 attack steps.
• We measure attack success rate by measuring the top-k match between f(x) and y:
$\mathrm{acc} = \frac{1}{k} \sum_{i=1}^{k} \mathbb{1}\big[\mathrm{argsort}(f(x))_i = \mathrm{argsort}(y)_i\big],$
with argsort(y) returning the indices of the sorted elements of y in decreasing order. For instance f(x) = (0.1, 0.05, 0.05, 0.05, 0.35, 0.2, 0.05, 0.05, 0.05, 0.05) would get an accuracy of 0.666, as the top 2 classes match with y but not the third.
We evaluate attacks on 256 random images from the CIFAR10 dataset. For each value of k between 1 and 10 we repeat the experiment 3 times and average the attack success rates. In Figure 2 we plot the L∞ attack success rate (ϵ = 0.03) for both white-box and transferred attacks as a function of the "target precision" k.
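A self-contained sketch of this procedure follows (our illustration, not the exact experimental code): `model` stands for a CIFAR10 classifier returning logits on 4-D image batches, and the objective is written as KL(target ∥ model output).

import torch
import torch.nn.functional as F

def sample_topk_target(k, num_classes=10):
    # Ordered subset of k classes plus a point on the k-simplex via normalized Exp(1) draws.
    classes = torch.randperm(num_classes)[:k]
    mass = -torch.rand(k).log()
    mass = mass / mass.sum()
    y = torch.zeros(num_classes)
    y[classes] = mass                          # zero probability on all non-selected classes
    return y

def pgd_topk(model, x, y_target, eps=0.5, lr=0.1, steps=1000):
    # L2-bounded PGD minimizing KL(y_target || softmax(model(x + delta))).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.kl_div(F.log_softmax(model(x + delta), dim=-1),
                        y_target.expand(x.size(0), -1), reduction="batchmean")
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad
            norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            delta *= (eps / norms).clamp(max=1.0).view(-1, 1, 1, 1)   # project onto the L2 ball
        delta.grad = None
    return (x + delta).detach()

def topk_success(logits, y_target, k):
    # Fraction of the first k ranks on which prediction and target agree.
    pred = logits.argsort(dim=-1, descending=True)[..., :k]
    tgt = y_target.argsort(dim=-1, descending=True)[..., :k]
    return (pred == tgt).float().mean().item()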
B.2 MILDLY TARGETED ASR ATTACKS
We train 5 identical conformer encoder models with 8 encoder layers, 4 attention heads, and hidden dimension 144. We train them with CTC loss for 30 epochs on the LibriSpeech train-clean-100 set, with different random seeds.
We run a L2-PGD attack with SNR bound 30dB, in which we minimize the cross-entropy loss between the utterance and its transcription prepended with the word ”But”. The utterances we attack are the first 100 sentences in the LibriSpeech test-clean set, to which we remove 7 sentences already starting with the word ”But”. We generate adversarial examples using each of the 5 models as proxy, and evaluate these examples on all 5 models. We report the full results in Table 5.
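The resulting numbers can be summarized as follows (a trivial sketch; `success` would be the 5 × 5 matrix of per-pair success rates):

import numpy as np

def transferability_summary(success: np.ndarray):
    # Off-diagonal entries are the 20 transferred scores (proxy != evaluated model);
    # the diagonal holds the white-box scores and is excluded from the average.
    off_diag = success[~np.eye(success.shape[0], dtype=bool)]
    return off_diag.mean(), off_diag.std()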
C FULL RESULTS TABLE FOR CROSS-MODEL ATTACKS
Table 6 completes the ablation study in Section 4 by evaluating all pairwise Proxy-Model combinations in our pool of Wav2Vec2-type models.
D INFLUENCE OF HYPERPARAMETERS ON ATTACK RESULTS
D.1 ATTACK RADIUS
In Table 7 we extend the results of Table 1 by comparing attack results for two different attack radii. These radii are ϵ = 0.015 and ϵ = 0.04, corresponding to Signal-to-Noise Ratios of 30dB and 22dB respectively. The former is identical to the setting of Table 1; the latter is substantially larger, and corresponds to a more easily perceptible noise.
Looking at the white-box attack results on the proxy models, the difference is drastic: with larger noise the targeted success rate jumps from 88% to 98%. The transferred attack results on SSL-pretrained models also increase overall, with success increases ranging from 0% (Wav2Vec2-Large) to 20% (Data2Vec-Large) and a median increase of 10%. Crucially however, the targeted success does not increase at all, and even decreases, for ASR models trained from scratch. This confirms that there is a structural difference between the robustness of ASR models with and without SSL, which cannot be bridged simply by increasing the attack strength.
D.2 LANGUAGE MODELS
In section 3 we report the results of our adversarial dataset on multiple Wav2Vec2-type models, enhanced with an N-gram language model whenever available. In Table 8 we evaluate the influence of that language model on attack results.
We observe that the attack success rate systematically increases by 8 to 17% when adding a language model to the ASR model. This is understandable considering that our targets are sound English sentences: if a model tends to transcribe that target with mistakes, the language model can bridge that
gap. To put it differently, the more prone an ASR model is to output sentences in a given distribution, the more vulnerable it is to attacks with targets sampled from that distribution. Language models are therefore more of a liability than a defense against attacks, and most likely so would be many tricks applied to an ASR model in order to improve its general performance.
D.3 EFFECT OF MODEL REGULARIZATION ON TRANSFERABILITY
As mentioned in Section 2.2 we use regularization tricks like dropout in all proxy models when optimizing the adversarial perturbation. In Figure 3b we plot the loss on proxy and private models without that regularization, for comparison with Figure 3a. We observe that the loss degrades significantly on private models without regularization.
On the other hand, the loss on the proxy converges much faster in Figure 3b: removing model regularization makes for better, faster white-box attacks, at the cost of all transferability. To the extent of our knowledge, past work like Carlini & Wagner (2018) has not used regularization for generation, explaining why they report better white-box attacks than we do in terms of WER and SNR. However, as we have established above, applying regularization against standard ASR models does not lead to transferable adversarial examples: for that, SSL pretraining is also required.
E DEFENDING AGAINST ADVERSARIAL EXAMPLES
Although we have shown that adversarial attacks can represent an important threat for private, SSL-based ASR models, it is possible to defend against them. Randomized smoothing (Cohen et al., 2019) is a popular adversarial defense that has been applied to ASR in the past (Olivier & Raj, 2021) and comes with some robustness guarantees. It consists in applying to the inputs, before feeding them to the model, amounts of random gaussian noise that are significantly larger than potential adversarial perturbations in L2 norm. For reference we try applying it to some of our models.
We follow Olivier & Raj (2021) and enhance randomized smoothing with a-priori SNR estimation and ROVER voting (with 8 outputs) to boost performance. We use gaussian deviation σ = 0.02. For evaluation, we simply check the effect of our adversarial examples generated in Section 3.1 on the smoothed model. A rigorous evaluation would require us to design adaptive attacks (Athalye et al., 2018; Tramer et al., 2020); since this paper does not focus on claiming robustness to attacks, we restrict ourselves to a simpler setting.
We report our results in Table 9 for the Wav2Vec2-Base, Wav2Vec2-Large and Data2Vec-Large models, pretrained and fine-tuned on 960h of LibriSpeech training data. We observe that randomized smoothing is sufficient to block the targeted attack completely (0% success rate) and recover most of the original transcription (the untargeted success rate drops to 14-34% depending on the model). However, due to the addition of gaussian noise on all inputs the defense takes a toll on the performance on clean data: the WER jumps by 4-10%. The standard deviation σ controls this tradeoff between robustness and performance; we chose the value of σ that minimizes the untargeted success rate.
Unsurprisingly, randomized smoothing is a promising protection against transferred attacks, but it does leave room for improvement. These results illustrate the need for additional research on adversarial defenses. | 1. What is the focus of the paper regarding recent self-supervised ASR models?
2. What are the strengths of the proposed approach, particularly in terms of its ability to conduct an ablation study and provide explanations for the observed phenomenon?
3. What are the weaknesses of the paper, especially regarding experiment settings and inconsistencies?
4. Do you have any concerns or questions about the experiments conducted in the paper, such as training an attack model to fool a non-SSL based model or the motivation behind using different versions of the model?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper shows that recent self-supervised ASR models are uniquely vulnerable to black-box adversarial attacks, which is an interesting observation and can inspire future research in this field.
Strengths And Weaknesses
Strength:
The paper shows that recent self-supervised ASR models are uniquely vulnerable to black-box adversarial attacks.
The paper conducts an ablation study to show that self-supervised learning is the main cause of that phenomenon.
The paper provides an explanation for this phenomenon.
Weakness:
The experiment settings in this paper are inconsistent and confusing. I summarize them with the following questions. Although I rate a relatively low score for this paper, I am willing to change my score if the issues are addressed.
In Section 3.4, the results show that SSL-based model is vulnerable to black-box adversarial attacks. However, the attack model is also trained to fool a SSL based models. What if the attack model is trained to fool a non-SSL based model such as the bottom three models in Table 1?
The experiments in Section 3 use an improved version of the attack mentioned in Section 2.2. However, in Section 4 only the default version from Section 2.2 is utilized. What is the motivation for this inconsistency?
(A question similar to the first question) In Section 4.1, it seems that the attack model is also trained to fool a wav2vec2-based model. In this way, it is natural that the wav2vec2-based model is more likely to be fooled compared with the non-wav2vec2 model.
In Sections 5.1 and 5.2, the conclusion is that a hard targeted attack is harder to achieve than a mildly targeted attack, which is an interesting but natural result. However, the intuition for why this result can explain the vulnerability of SSL-based ASR models is not discussed in detail.
Clarity, Quality, Novelty And Reproducibility
The phenomenon reported in this paper is interesting. |
ICLR | Title
When Majorities Prevent Learning: Eliminating Bias to Improve Worst-group and Out-of-distribution Generalization
Abstract
Modern neural networks trained on large datasets achieve state-of-the-art (in-distribution) generalization performance on various tasks. However, their good generalization performance has been shown to be attributable largely to overfitting spurious biases in large datasets. This is evident from the poor generalization performance of such models on minorities and out-of-distribution data. To alleviate this issue, subsampling the majority groups has been shown to be very effective. However, it is not clear how to find the subgroups (e.g. within a class) in large real-world datasets. Besides, naively subsampling the majority groups can entirely deplete some of their smaller sub-populations and drastically harm the in-distribution performance. Here, we show that tracking gradient trajectories of examples in initial epochs allows for finding large subpopulations of data points. We leverage this observation and propose an importance sampling method that is biased towards selecting smaller subpopulations, and eliminates bias in the large subpopulations. Our experiments confirm the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior in- and out-of-distribution performance on various datasets.
1 INTRODUCTION
Large datasets have enabled modern neural networks to achieve unprecedented success on various tasks. Large datasets are, however, often heavily biased towards the data-rich head of the distribution (Le Bras et al., 2020; Sagawa et al., 2020; 2019). That means, there are large groups of potentially redundant data points belonging to majority subpopulations, and smaller groups of examples representing minorities. Larger groups often contain spurious biases, i.e., unintended but strong correlations between examples (e.g. image background) and their label. In such settings, overparameterized models learn to memorize the spurious features instead of the core features for the majority, and overfit the minorities (Sagawa et al., 2020). As a result, despite their superior performance on in-distribution data, overparameterized models trained on biased datasets often have a poor worst-group and out-of-distribution generalization performance.
To improve the high worst-group error and out-of-distribution generalization, techniques such as distributionally robust optimization (DRO), or up-weighting the minority groups, are commonly used (Sagawa et al., 2019; 2020). However, such methods have been shown to be highly ineffective for overparameterized models in the presence of spurious features (Sagawa et al., 2020). When majority groups are sufficiently large and the spurious features are strong, overparameterized models choose to exploit the spurious features for the majorities and memorize the minorities, as it entails less memorization on the entire data. In this setting, upweighting minorities only exacerbates spurious correlations, and subsampling the majorities has been advocated for (Sagawa et al., 2020). But, this requires the groups to be specified beforehand, which is not available for real-world datasets. Besides, random subsampling of the majority groups can entirely deplete some of their subpopulations and drastically harm the in-distribution performance (Toneva et al., 2018; Paul et al., 2021).
In this work, we propose an effective way to find large subpopulations of examples (see Fig. 1), and subsample them to ensure inclusion of representative examples from all the subpopulations. We rely on the following recent observations. In the initial training epochs, the network learns important
features and the NTK undergoes rapid changes, which determine its final basin of convergence (Fort et al., 2020). This results in learning a linear function during the initial epochs, followed by learning functions of increasing complexity (Nakkiran et al., 2019). We show that large subpopulations are responsible for forming the initial linear model, by inserting large gradient forces in the first few epochs. The minorities, on the other hand, dictate the higher-complexity functions later in training. To find the large subpopulations, we track the gradient trajectories—the way the gradient changes— during initial training epochs. Then, we cluster similar gradient trajectories together, and employ an importance sampling method that samples data points from every cluster by a probability equal to the inverse of the size of the cluster it belongs to. This allows selecting a balanced subset from different clusters. By studying the effect of our method on the evolution of the model early during the training, we show that our method allows the model to better learn from all the subpopulations by balancing the gradient forces between different groups. This enables learning higher-quality features.
Our empirical studies confirm the effectiveness of our method in improving the worst-group and out-of-distribution generalization, while enjoying a superior in-distribution performance even when the size of the selected sample is small. Notably, on the CMNIST (Alain et al., 2015) and Waterbirds (Sagawa et al., 2019) datasets, which contain strong spurious biases, our method achieves a comparable or even better performance than the state-of-the-art methods, which rely on the underlying group information to uniformly subsample the majority group. In addition, on CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and Caltech256 (Griffin et al., 2007) our method provides a superior in-distribution performance to state-of-the-art data pruning methods, based on forgettability (Toneva et al., 2018) and EL2N (Paul et al., 2021) scores, especially for small subsets. At the same time, it outperforms such methods on out-of-distribution data, CIFAR10-C (Hendrycks & Dietterich, 2019).
2 RELATED WORK
Data pruning for worst-group generalization. To improve the generalization performance on minorities, preventing the model from learning spurious features is very helpful (Sagawa et al., 2019; 2020). For overparameterized models, randomly subsampling the majorities has been shown to be more effective (Sagawa et al., 2020) than distributionally robust optimization (DRO) (Sagawa et al., 2019) and up-weighting the minority groups (Sagawa et al., 2020). However, this requires the group labels to be specified beforehand, which is not available for large real-world datasets. Besides, if the majority contains imbalanced subpopulations, random subsampling inherits similar biases. Finally, random subsampling of the majority groups can entirely deplete some of their smaller subpopulations and drastically harm the in-distribution performance, as we empirically show.
A different line of work (Sohoni et al., 2020; Nam et al., 2020; Ahmed et al., 2020; Liu et al., 2021; Creager et al., 2021; Taghanaki et al., 2021; Zhang et al., 2022; Nam et al., 2021) studies how to improve worst-group generalization without having access to group labels. These methods require training a model first to minimize the average empirical risk before training the robust model, which doubles the training time and is thus also not practical for large real-world datasets.
Data pruning for OOD generalization. Spurious features have been shown to also harm the out-of-distribution generalization (Le Bras et al., 2020). To alleviate this, Swayamdipta et al. (2020) proposed to train on the subset of most ambiguous instances whose true class probabilities fluctuate frequently during training, and Le Bras et al. (2020) employed AFLite to iteratively filter highly-predictable examples by training multiple linear classifiers on different random partitions of the data. However, such methods drastically harm the in-distribution performance.
Data pruning for in-distribution generalization. The main idea behind all data pruning methods is to define a notion of example difficulty, and prune the easy-to-learn examples. Notably, Coleman et al. (2020) used a smaller trained proxy model to find the most uncertain examples to train a larger model. Toneva et al. (2018) defined a forgetting event of an example as transitioning from being classified correctly to incorrectly during training, and drop the examples with no forgetting events. Most recently, Paul et al. (2021) dropped examples with the lowest average errors (EL2N) recorded early in training and averaged over several initializations. The above heuristics require full or partial training of multiple models, can only drop a relatively small fraction of the examples, and hurt the out-of-distribution performance, as we show in our experiments. In contrast, our method can successfully alleviate bias and achieve a superior in- and out-of-distribution performance.
3 PROBLEM FORMULATION
Training machine learning models is often reduced to minimizing an empirical risk function (ERM). That is, the goal is to find the parameter w∗ that minimizes the average error on the entire training data D = (X,y) = {(xi, yi)}i∈V , where V = {1, · · · , n} indexes the training data. Formally,
$w^* = \arg\min_{w\in\mathcal{W}} \mathcal{L}(w), \qquad \mathcal{L}(w) = \mathbb{E}_{(x_i,y_i)\in D}\big[l(f(w,x_i), y_i)\big], \qquad (1)$
where w is the model parameter, and f(w,x_i) and l(f(w,x_i), y_i) are the output of the network and the value of the loss associated with a training example (x_i, y_i), respectively. For large datasets, the average error L is minimized by applying (Stochastic) Gradient Descent with learning rate η starting from a random initial point w_0:
$w_{t+1} = w_t - \eta \nabla\mathcal{L}(w_t), \qquad \nabla\mathcal{L}(w_t) = \mathcal{J}(w_t,X)^T \nabla_f l(f(w_t,X), y), \qquad (2)$
where $y = \{y_i\}_{i=1}^n$, $X = \{x_i\}_{i=1}^n$, and $\mathcal{J}(w,X) \in \mathbb{R}^{n\times m}$ is the Jacobian matrix associated with the nonlinear network $f : \mathbb{R}^d \rightarrow \mathbb{R}^o$, defined as
$\mathcal{J}(w,X) = \Big[\frac{\partial f(w,x_1)}{\partial w} \cdots \frac{\partial f(w,x_n)}{\partial w}\Big]^T, \qquad (3)$
and $\nabla_f l(f(w_t,X), y)$ is the gradient of the loss w.r.t. the network output. Furthermore, $\Theta_t(X,X) = \mathcal{J}(w_t,X)\mathcal{J}(w_t,X)^T$ is the empirical neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2018), describing the evolution of the network during training by gradient descent.
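As an illustration (our sketch, not part of the method), the empirical NTK can be formed explicitly for a small model by stacking per-example, per-output parameter gradients:

import torch

def empirical_ntk(model, X):
    # Theta(X, X) = J J^T, with J the (n*o, m) Jacobian of all outputs of all
    # examples w.r.t. the flattened parameters. Loop-based, so only practical
    # for tiny models and small batches.
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for x in X:                                        # X: tensor of n examples
        outputs = model(x.unsqueeze(0)).flatten()      # o outputs for this example
        for o in outputs:
            grads = torch.autograd.grad(o, params, retain_graph=True)
            rows.append(torch.cat([g.flatten() for g in grads]))
    J = torch.stack(rows)
    return J @ J.T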
Spurious features and majority groups. We consider a similar setting with (Sagawa et al., 2019; 2020), where each training example (xi, yi) is associated with a spurious attribute ai that is correlated with its label yi. The examples with the same spurious attribute and label make a group gj,k ∈ G, where gj,k = {(xi, yi)|i ∈ V, ai = j, yi = k}. The groups which contain considerably more examples than the rest are referred to as majority groups. For example, in the Waterbirds dataset (Sagawa et al., 2019), every example (xi, yi) belongs to one of the 2 classes, yi ∈ {waterbird, landbird} and the image background ai ∈ {water background, land background} is spuriously correlated with the label yi. Thus, there are four groups of examples associated with every combination of spurious attribute and label, i.e., G ={(waterbird, water background), (waterbird, land background), (landbird, water background), (landbird, land background)}. The majority groups are (waterbird, water background), and (landbird, land background). Importantly, in this work, we assume that the groups and spurious attributes are not known at training time.
Subpopulations. Every dataset can be partitioned into s different subpopulations of examples that are similar in terms of their effect on training, i.e. the indices of the training data V can be partitioned into V = {V1, · · · , Vs}. For a formal definition, see Section 4.2. Note that subpopulations may represent a finer clustering compared to group clustering. Fig. 1 shows an illustration of groups vs. subpopulations for Waterbird dataset. We develop a method that automatically clusters the data and identifies large subpopulations.
Objective. Our goal is to find a subset S ⊆ V of size r = |S| from all training examples indexed by V , such that training on the subset alleviates the effect of spurious biases and improves (1) the worst-group generalization when the groups are imbalanced, or (2) out-of-distribution generalization under distribution shift. In both cases, we aim to preserve a good performance on the in-distribution data. In particular, the worst-group error is defined as,
$\mathrm{Err}_{wg} = \max_{g\in G}\ \mathbb{E}_{x_i,y_i|g}\big[y_i \neq y_f(w,x_i)\big], \qquad (4)$
where $y_f(w, x_i)$ is the label predicted by the model. In other words, Err_wg measures the highest fraction of examples that are incorrectly classified across all groups. Similarly, the out-of-distribution (OOD) performance measures the performance of the model f trained on the training set D, and tested on D′ = (c(X), y), where c is from a set of shifting functions C. Formally,
$\mathrm{Err}_{ood} = \mathbb{E}_{(x_i,y_i)\in D'}\big[y_i \neq y_f(w,x_i)\big], \qquad (5)$
measures the fraction of examples that are misclassified when (xi, yi) is drawn i.i.d. from D′.
4 ELIMINATING BIAS IN THE DATA
In this section, we present our main results. We start by discussing the effect of large subpopulations on early learning dynamics. Then, we explain how gradient trajectories of examples during the initial training epochs allow finding the large subpopulations. Next, we employ importance sampling to find a subset that contains a similar number of examples from subpopulations. Finally, we study how the subset found by our method affects the network’s early learning dynamics.
4.1 EFFECT OF LARGE SUBPOPULATIONS ON EARLY LEARNING DYNAMICS
Recent empirical studies on neural networks’ training dynamics show that in the initial epochs the performance of a network trained by SGD can be explained by a linear classifier. Formally, if F and L are the corresponding random variables for the neural network and a linear model respectively, the mutual information between F and y conditioned on L, I(F ;y|L), captures the part of F ’s success on the prediction of y in addition to L. Then the performance correlation between F and L, µy(F ;L) := I(F ;y) − I(F ;y|L), is the part of F ’s success on the prediction of y that can be explained by L. Nakkiran et al. (2019) show that there exists T0 such that µy(Ft;L) ≈ I(Ft;y) at training step t for all t < T0. As training progresses, the network learns functions of increasing complexity (Nakkiran et al., 2019). Furthermore, Fort et al. (2020) show that during the first few epochs of training, neural network experience a rapid initial transient which determines the final basin of convergence. During this period, the NTK changes very rapidly and learns useful features.
First, we empirically show that large subpopulations are responsible for forming the initial linear model in the first few training epochs. Effectively, during training every example contributes to minimizing the loss by its gradient. Examples with similar gradients insert a similar force on the model and affect the model in the same direction. In the first few epochs, large gradient forces of large subpopulations highly bias the initial linear function. As the gradient forces of large subpopulations persist during training, the initially learned linear function is retained, to the point of zero training error (Nakkiran et al., 2019). As a result, large subpopulations dictate the rapid initial change of the NTK and the prominent features learned in this phase. On the other hand, smaller subpopulations have a smaller influence on the model, and require a larger number of training iterations to be learned by the higher-complexity functions that are shaped later during the training.
When the spurious bias is strong, the initial linear function is dictated mainly by the spurious feature, and persists during training. This provides a good training and generalization error on the large groups, and thus prevents learning their core features. On the other hand, on the small subpopulations, functions with much higher complexity overfit and memorize the minorities. Such functions result in a small training error but a poor worst-group generalization performance on the minorities. This further explains the observation by Sagawa et al. (2020) showing that overparameterized models memorize the spurious feature and overfit the minorities. Effectively, the spurious features in large subpopulations prevent the model from learning the core features from the data.
Fig. 2a shows that the network’s prediction early in training can be well explained by a linear classifier (red). Besides, the network trained on large subpopulations can be well explained by the same
linear classifier (purple). This confirms that early training dynamics are dictated by large subpopulations. We also see that the behavior of the network trained on the subset selected by our method (discussed in Section 4.3) cannot be explained by the same linear model (brown). Fig. 2b shows that the linear model explaining the network trained on full data (blue) is similar to the linear model explaining the network trained on large subpopulations (orange), but different than the one explaining the network trained on our chosen subset (green). We see that the performance of the network can be well explained by a linear model, and the linear model fitted on large subpopulations closely matches the one fitted on the entire data. These results further confirm that the linear classifiers fitted to entire data and large subpopulations are effectively the same during the initial training epochs.
4.2 FINDING THE LARGE SUBPOPULATIONS IN EVERY CLASS
The first question we aim to answer is how to find the large subpopulations of the data, without having such labels. As discussed, larger subpopulations insert a large gradient force on the model, and are learned during the initial epochs. When an example is learned, its gradient becomes nearly zero. Hence, every example has a gradient trajectory interpolating between its gradient at initialization and zero. Subpopulations that affect the model similarly have a similar gradient trajectory during training. Therefore, large subpopulations with similar gradient trajectories can be identified based on their gradient trajectory during the first few epochs.
To find the large subpopulations, we cluster the gradient trajectories during the initial epochs of training. As gradients are very high-dimensional, we first reduce the gradient dimensionality to better find the clusters. To do so, we rely on the following observation: for neural networks, the variation of the gradient norms is mostly captured by the gradient of the loss w.r.t. the input to the last layer of the network (Katharopoulos & Fleuret, 2018). These lower-dimensional gradients can be efficiently computed in closed form, and have been used as a gradient proxy in several recent works (Mirzasoleiman et al., 2020; Paul et al., 2021; Pooladzandi et al., 2022). Formally, for every example i we build its gradient trajectory by concatenating the lower-dimensional gradients during the first t training epochs, i.e.,
$\nabla_f^{0:t} l(x_i, y_i) = \big[\nabla_f l(f(w_0,x_i), y_i),\ \nabla_f l(f(w_1,x_i), y_i),\ \cdots,\ \nabla_f l(f(w_t,x_i), y_i)\big], \qquad (6)$
where $\nabla_f l(f(w_j,x_i), y_i)$ is the gradient of the loss w.r.t. the input to the last layer of the network at epoch j for training example (x_i, y_i). Note that as the gradient of an example depends on its label, examples from different classes do not have similar gradients. Hence, we find similar gradient trajectories from every class separately.
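For softmax cross-entropy this last-layer gradient has the closed form softmax(z) − one-hot(y), so the per-epoch proxies can be recorded cheaply; a minimal sketch (ours, with a non-shuffling loader assumed) is:

import torch
import torch.nn.functional as F

@torch.no_grad()
def last_layer_gradients(model, loader, num_classes, device="cpu"):
    # d loss / d logits = softmax(logits) - one_hot(y) for cross-entropy.
    # Returns an (n, C) tensor; the loader is assumed to iterate in a fixed order.
    model.eval()
    grads = []
    for x, y in loader:
        probs = F.softmax(model(x.to(device)), dim=-1).cpu()
        grads.append(probs - F.one_hot(y, num_classes).float())
    return torch.cat(grads)

# The trajectory of Eq. (6) is then the concatenation over the first t epochs, e.g.
#   traj = torch.cat([last_layer_gradients(ckpt, loader, C) for ckpt in checkpoints], dim=1),
# giving one (t+1)*C-dimensional vector per example to cluster within each class.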
Next, we cluster gradient trajectories to find the large subpopulations in every class. While any clustering algorithm can be used, we use the k-medoids objective to find the clusters efficiently. In particular, for 0 < κ < 1, we partition a class indexed by Vc ⊆ V to kc = κ · |Vc| subpopulations, by first finding the set Sc of its kc most centrally located gradient trajectories (medoids) by solving:
$S_c^* \in \arg\max_{S\subseteq V_c,\ |S|\leq k_c} F(S) \quad \text{s.t.} \quad F(S) := \sum_{i\in V_c}\ \max_{j\in S}\ \big(\mathrm{cnt} - \|\nabla_f^{0:t} l(x_i, y_i) - \nabla_f^{0:t} l(x_j, y_j)\|\big), \qquad (7)$
Algorithm 1 Training without Bias
Input: Model f, initial epoch number t, subset fraction κ
Output: Model f trained without bias
1: Train the model f for t epochs from w_0 and save gradient trajectories ∇_f^{0:t} l(x_i, y_i) for all i ∈ V
2: for every class V_c do
3:   S_c ← ∅
4:   for i = 1, 2, · · · , κ · |V_c| do
5:     j ∈ argmax_{e ∈ V_c \ S_c} F(e|S_c)
6:     S_c = S_c ∪ {j}
7:   for i ∈ S_c do
8:     V_{c,i} = {j ∈ V_c | i = argmin_{s ∈ S_c} ∥∇_f^{0:t} l(x_j, y_j) − ∇_f^{0:t} l(x_s, y_s)∥}
9: for j ∈ V do
10:   w_j = |V_{c,i}| s.t. j ∈ V_{c,i}
11:   p_j = u_j^{1/w_j} s.t. u_j ∈ (0, 1) is a uniform random number
12: S = {r examples with the largest p_j}
13: Train the model f from w_0 on S
where cnt is a large constant. Then to find the subpopulations, we assign every example to the medoid j ∈ S_c with the most similar trajectory. This partitions the examples in class V_c into k_c subpopulations V_c = {V_{c,1}, · · ·, V_{c,k_c}}, where $V_{c,j} = \{i\in V_c \mid j=\arg\min_{s\in S_c}\|\nabla_f^{0:t} l(x_i, y_i)-\nabla_f^{0:t} l(x_s, y_s)\|\}$.
The maximization problem (Eq. (7)) is NP-hard. However, since the k-medoids objective is monotone and submodular¹, a near-optimal solution of size k can be found efficiently in O(|V| · k) time. For maximizing a monotone submodular function, the greedy algorithm provides a (1 − 1/e) approximation guarantee (Wolsey, 1982). The greedy algorithm starts with the empty set $S_0 = \emptyset$, and at each iteration l, chooses an element $e \in V$ such that $S_l = S_{l-1} \cup \{\arg\max_{e\in V} F(e|S_{l-1})\}$.
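A minimal sketch (ours) of this greedy selection and the subsequent assignment, operating on the trajectory matrix of a single class, is:

import numpy as np

def greedy_medoids(traj, k, cnt=None):
    # Greedy maximization of F(S) = sum_i max_{j in S} (cnt - ||traj_i - traj_j||).
    # traj is the (n, d) trajectory matrix of one class; the full pairwise distance
    # matrix makes this practical only for moderately sized classes.
    dist = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)   # (n, n)
    cnt = dist.max() + 1.0 if cnt is None else cnt                        # large constant
    sim = cnt - dist                                                      # strictly positive
    n = traj.shape[0]
    selected, best = [], np.zeros(n)   # best[i] = max similarity of i to a chosen medoid
    for _ in range(k):
        gains = np.maximum(sim - best[:, None], 0.0).sum(axis=0)          # marginal gains F(e|S)
        gains[selected] = -np.inf
        j = int(gains.argmax())
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    assignment = dist[:, selected].argmin(axis=1)   # subpopulation index of every example
    return selected, assignment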
4.3 BALANCING THE SUBPOPULATIONS
To alleviate the bias of the large subpopulations and enable effective learning of core features, we aim to prune the large gradient trajectory clusters formed in initial epochs. This prevents the initial linear model from being biased toward the large subpopulations. In doing so, we allow the initial linear model to capture the complexity in different subpopulations, and learn the core features instead of the spurious features of the majorities. Hence, the model obtains a better generalization performance on minorities and out-of-distribution data. However, this should be done carefully as over-pruning the large subpopulations prevents them from participating in forming the initial model. This drastically harms the in-distribution generalization performance of the model.
To address this, we employ an importance sampling method on the union of the subpopulations of all classes, to select every example with probability equal to the inverse of the size of the subpopulation it belongs to. In particular, we weigh every example $i \in V_{c,j}$ by the size of the cluster $j \in S_c$ it belongs to, i.e., $w_i = |V_{c,j}|$. Then, we use the algorithm of Efraimidis & Spirakis (2006) to select a sample with probabilities equal to $p_i = 1/w_i$, without replacement. The sampling procedure works as follows. For each example i in the dataset, we independently generate a uniform random number $u_i \in (0, 1)$ and calculate $q_i = u_i^{1/w_i}$. Examples that possess the r largest $q_i$ form the final subset S. Our sampling method biases the sample selection towards the smaller subpopulations, and drops many examples from the larger subpopulations. However, it still preserves the patterns in larger subpopulations, by including a smaller number of their examples in the sample. Effectively, our method balances the gradient forces between different subpopulations. This increases the strength of the core gradient vs. the spurious gradient. In doing so, it allows different subpopulations to participate in forming the initial linear model and dictate a more generalizable basin in which the model can be further fine-tuned. Hence, it enables better learning of the core features.
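A minimal sketch (ours) of this sampling step follows. Note the convention: here the Efraimidis & Spirakis key is written as $u_i^{1/p_i}$ with $p_i = 1/|V_{c,j}|$ the per-example sampling weight, so that examples from small subpopulations receive the largest keys; the cluster indexing and the seed are illustrative assumptions.

import numpy as np

def balanced_subset(cluster_ids, r, seed=0):
    # cluster_ids[i] is the (global) cluster index of example i. Each example gets
    # sampling weight p_i = 1 / |cluster of i|; keeping the r largest keys
    # u_i ** (1 / p_i) gives a without-replacement sample biased towards small clusters.
    rng = np.random.default_rng(seed)
    sizes = np.bincount(cluster_ids)[cluster_ids]      # |V_{c,j}| for every example
    p = 1.0 / sizes
    keys = rng.uniform(size=len(cluster_ids)) ** (1.0 / p)
    return np.argsort(keys)[-r:]                       # indices of the selected subset S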
The pseudocode is illustrated in Algorithm 1.
1A set function F : 2V → R+ is submodular if F (e|S) = F (S ∪ {e})− F (S) ≥ F (T ∪ {e})− F (T ), for any S ⊆ T ⊆ V and e ∈ V \ T . F is monotone if F (e|S) ≥ 0 for any e∈V \S and S ⊆ V .
4.4 EFFECT OF PRUNING ON EARLY NETWORK EVOLUTION
Next, we take a closer look at the effect of our method on the evolution of the model. In particular, we show that training on the subset S selected by our method decreases the speed of learning on large subpopulations and lets the other groups have a larger contribution to the initial phase of learning.
When the model is trained on the subset $D_S = (X_S, y_S)$, the weight evolution over one step can be written as
$\Delta_S w_t = -\eta\nabla\mathcal{L}(w_t,X_S) = -\eta\mathcal{J}(w_t,X_S)^T \nabla_f l(f(w_t,X_S), y_S). \qquad (8)$
Furthermore, the network evolution can be approximated using a first-order Taylor expansion, i.e.,
$\Delta_S f(w_t,X) = \mathcal{J}(w_t,X)\Delta_S w_t = -\eta\mathcal{J}(w_t,X)\mathcal{J}(w_t,X_S)^T \nabla_f l(f(w_t,X_S), y_S) \qquad (9)$
$= -\eta\Theta_t(X,X_S)\nabla_f l(f(w_t,X_S), y_S), \qquad (10)$
where Θt(X,XS) = J (wt,X )J (wt,XS)T is the empirical neural tangent kernel, describing the evolution of the network when training only on the subset S. The following Lemma quantifies the effect of pruning the large subpopulation on the model evolution at one training step.
Lemma 4.1 Training on the subset S sampled from ζ subpopulations found by our method, with learning rate $\eta \leq 1/\|\mathcal{J}(w_t,X)\|$, changes the predictions of the model at every step by at most:
$\|\Delta f(w_t,X) - \Delta_S f(w_t,X)\| = \eta\|\Theta_t(X,X)\nabla_f l(X,w_t) - \Theta_t(X,X_S)\nabla_f l(X_S,w_t)\| \qquad (11)$
$\leq \sum_{z\in[\zeta]} |\alpha'_z - \alpha_z| \cdot \|\max_{j\in V_z} \nabla l(f(w_t,x_j), y_j)\|, \qquad (12)$
where αz = |Vz| is the size of subpopulation Vz , and α′z = |Vz ∩ S| is its size in the subset S.
The proof can be found in Appendix A.1.
Lemma 4.1 upper-bounds how training on the subset found by our method changes the effect of different subpopulations on the model predictions. When the subpopulations are approximately balanced, we have α′z ≈ καz . Thus, training on the subset S yields similar network evolution to that of the full data, and only scales down the learning rate. However, when subpopulations are imbalanced, it effectively decreases the gradient force of large subpopulations by |αz − α′z| · ∥maxj∈Vz ∇f l(f(wt,xj), yj)∥. Effectively, this reduces the speed of learning and bias of such subpopulations on the model. On the other hand, our importance sampling method preserves the small subpopulations, i.e., αz ≈ α′z and maintains their original gradient force on the model. Therefore, our subset balances the gradient forces and let different subpopulations participate in forming the lower-complexity models in the initial epochs. As Fig. 3 shows, individual examples in large subpopulations have a smaller gradient norm. Hence, a larger number of them can be pruned without significantly affecting the model. However, entirely dropping the large subpopulations have a larger cumulative effect compared to dropping a smaller number of examples in small subpopulations with larger norms. Hence, it drastically harms the in-distribution performance.
5 EXPERIMENTS
In this section, we evaluate the effectiveness of our method in assisting neural networks to learn better features. In particular, we consider the following two scenarios. First, we apply our method to improve the worst-group generalization performance, when training data contains spurious correlation. Then, we consider the application of our method to improve out-of-distribution performance, under distribution shift. In both cases, we also compare the in-distribution generalization performance of the networks trained on our subset vs full training data.
5.1 WORST-GROUP GENERALIZATION IN PRESENCE OF SPURIOUS CORRELATION
First, we evaluate the worst-group generalization performance of a model trained on our subset vs full data, in presence of spurious correlation. We record gradient trajectories during the initial 4 epochs and select 10% of training examples as the subset. The reported results are averaged over 3 runs.
Datasets & Models. We apply our method to the Colored-MNIST and Waterbirds datasets. The Colored-MNIST dataset is a synthetic dataset derived from MNIST (LeCun et al., 1998). It was first proposed in (Alain et al., 2015) as a binary classification task that contains spurious correlations—the grey-scale digits are changed to colors that are strongly correlated with the labels. We use a 5-layer CNN with 2 convolutional layers and 3 fully-connected layers. The Waterbirds dataset is introduced by Sagawa et al. (2019) to study the spurious correlation between the background and the foreground in image recognition. Species in the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset (Wah et al., 2011) are grouped into two classes, waterbirds and landbirds. All birds are then cut and pasted onto new background images, with waterbirds more likely to appear on water and landbirds having a higher probability on land. There are 4795 training examples in total, 3498 for landbirds with land background, 184 for landbirds with water background, 56 for waterbirds with land background, and 1057 for waterbirds with water background. We use a pretrained ResNet-50 model.
Baselines. Empirical risk minimization (ERM) trains on all data, Random selects a subset uniformly at random, Upweight weights every example by the inverse of the group size, and Balanced samples an equal number of examples from different groups.
Ablation. To show the failure mode of random sampling when the majority has imbalanced subpopulations, we modify the dataset to make it more imbalanced, by pruning smaller clusters.
Evaluation metrics. We use two metrics proposed in Sagawa et al. (2019), namely worst-group accuracy and adjusted average accuracy. Worst-group accuracy is the minimum accuracy across all groups, and adjusted average accuracy is the average accuracy over groups weighted by their size.
Results. Table 1 shows that the models trained on subsets found by our method obtain the highest worst-group and in-distribution test accuracy, when compared with baselines that do not require group labels. Besides, our method achieves a comparable performance to those that use the group information, and even outperforms them on the Waterbirds dataset. We note that group labels are not available in real-world datasets. Methods that do not rely on group labels, including our method, do not require knowing the minority groups. Therefore, they are more practical in realistic settings.
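For concreteness, a minimal sketch (ours) of the two evaluation metrics defined above; `correct` is the per-example test correctness, `groups` each example's group index, and `group_sizes` the sizes used for weighting (e.g. the training-set group sizes), all of which are placeholder names:

import numpy as np

def group_metrics(correct, groups, group_sizes=None):
    # Worst-group accuracy and size-weighted (adjusted) average accuracy.
    group_ids = np.unique(groups)
    accs = np.array([correct[groups == g].mean() for g in group_ids])
    if group_sizes is None:
        group_sizes = np.array([(groups == g).sum() for g in group_ids])
    worst_group_acc = accs.min()
    adjusted_avg_acc = (accs * group_sizes).sum() / group_sizes.sum()
    return worst_group_acc, adjusted_avg_acc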
GradCam. Fig. 4 demonstrates GradCAM (Selvaraju et al., 2017) visualizations depicting saliency maps for samples from the Waterbirds dataset with water and land backgrounds. Warmer colors denote higher saliency, suggesting that the model considered these pixels more important in making the final classification measured by gradient activations. We see that the subset found by our method allows the model to learn the core features much better than ERM and Random baselines.
5.2 OUT-OF-DISTRIBUTION GENERALIZATION UNDER DISTRIBUTION SHIFT
Next, we empirically evaluate the in-distribution and out-of-distribution performance of our method under distribution shift. The results are based on 3 independent runs, each with a different mini-batch order and initial parameter values. We record gradient trajectories during the initial 4 epochs.
Datasets. We apply our method to CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), and Caltech-256 (Griffin et al., 2007). In particular, we keep the number of training iterations fixed (78k for CIFAR-10 and CIFAR-100, and 4.8k for Caltech-256) as we vary the size of the subset.
Baselines. We compare our method with Random sampling, and the state-of-the-art baselines for in-distribution data pruning, based on EL2N (Paul et al., 2021) and forgetting scores (Toneva et al., 2018). The EL2N score of a training example i is defined as E∥p(w,x_i) − y_i∥2, the expected L2 norm of the error between the softmax output and the one-hot label. We calculate EL2N after 20 epochs of training and average it over 10 different runs, as this is shown to be the most accurate. The forgetting score of an example is the number of times the example is misclassified after being correctly classified during the entire training. We calculate the number of forgetting events for each training example by averaging over 5 runs of 200 epochs, as suggested by Toneva et al. (2018).
Results. Fig. 5 (a), (b), (c) show that on different datasets, training on the subset selected by our method gives much higher in-distribution test accuracy than Random, EL2N, or forgetting scores, particularly when the subset is small. Note that the EL2N and forgettability baselines use more information over many training epochs and multiple runs. Importantly, Fig. 5 (d) confirms that our method outperforms the baselines on CIFAR-10-C (Hendrycks & Dietterich, 2019), with distribution shift. We train on our downsampled CIFAR-10 training set, and test on CIFAR-10-C (Hendrycks & Dietterich, 2019), a collection of OOD test sets for CIFAR-10. For each corruption type, we report the average test accuracy over 5 different intensity levels. Our method can achieve at least 2% higher test accuracy than other baselines. For some corruption types (Gaussian noise, shot noise, and impulse noise), our performance is even on par with or surpasses training on the full data.
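A minimal sketch (ours) of the EL2N baseline as used here, averaging the error-vector norms over several independently trained checkpoints; the model list and loader are placeholders:

import torch
import torch.nn.functional as F

@torch.no_grad()
def el2n_scores(models, loader, num_classes, device="cpu"):
    # EL2N score: L2 norm of the error vector p(x) - one_hot(y), averaged over a
    # list of checkpoints trained from different initializations. The loader is
    # assumed to iterate over the training set in a fixed order.
    scores = 0.0
    for model in models:
        model.eval()
        errs = []
        for x, y in loader:
            p = F.softmax(model(x.to(device)), dim=-1).cpu()
            errs.append((p - F.one_hot(y, num_classes).float()).norm(dim=-1))
        scores = scores + torch.cat(errs)
    return scores / len(models)

# Pruning then keeps the examples with the largest scores (the hardest examples)
# and drops those with the lowest average errors, as described above.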
6 CONCLUSION
We showed that larger subpopulations containing spurious biases prevent the learning of high-quality features, and that such subpopulations can be identified by tracking the gradient trajectories of examples during the initial epochs. We then proposed an importance sampling method to balance the subpopulations and ensure the inclusion of representative examples from all of them. Our experiments confirmed the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior in- and out-of-distribution performance on various datasets.
A APPENDIX
A.1 PROOF OF LEMMA 4.1
The evolution of the logits over one step can be written as:

\begin{align}
\|\Delta f(w_t,\mathcal{X}) - \Delta_S f(w_t,\mathcal{X})\|
&= \eta \left\| \Theta_t(\mathcal{X},\mathcal{X})\, \nabla_f l(\mathcal{X},w_t) - \Theta_t(\mathcal{X},\mathcal{X}_S)\, \nabla_f l(\mathcal{X}_S,w_t) \right\| \tag{13} \\
&= \eta \left\| \mathcal{J}(w_t,\mathcal{X})\mathcal{J}(w_t,\mathcal{X})^T \nabla_f l(\mathcal{X},w_t) - \mathcal{J}(w_t,\mathcal{X})\mathcal{J}(w_t,\mathcal{X}_S)^T \nabla_f l(\mathcal{X}_S,w_t) \right\| \tag{14} \\
&\leq \eta \left\| \mathcal{J}(w_t,\mathcal{X}) \right\| \cdot \left\| \mathcal{J}(w_t,\mathcal{X})^T \nabla_f l(\mathcal{X},w_t) - \mathcal{J}(w_t,\mathcal{X}_S)^T \nabla_f l(\mathcal{X}_S,w_t) \right\| \tag{15} \\
&\leq \Big\| \sum_{i\in V} \nabla l(f(w_t,x_i),y_i) - \sum_{j\in S} \nabla l(f(w_t,x_j),y_j) \Big\| \tag{16} \\
&= \Big\| \sum_{z\in[\zeta]} \sum_{j\in V_z\setminus S} \nabla l(f(w_t,x_j),y_j) \Big\| \tag{17} \\
&\leq \sum_{z\in[\zeta]} \sum_{j\in V_z\setminus S} \left\| \nabla l(f(w_t,x_j),y_j) \right\| \tag{18} \\
&\leq \sum_{z\in[\zeta]} |\alpha'_z - \alpha_z| \cdot \max_{j\in V_z} \left\| \nabla l(f(w_t,x_j),y_j) \right\|, \tag{19}
\end{align}

where Eq. (16) holds because $\eta \leq 1/\|\mathcal{J}(w_t,\mathcal{X})\|$ and $\mathcal{J}(w_t,\mathcal{X})^T \nabla_f l(\mathcal{X},w_t) = \sum_{i\in V} \nabla l(f(w_t,x_i),y_i)$, Eq. (17) regroups the examples dropped from each subpopulation (using $S \subseteq V$), and Eq. (19) bounds each subpopulation's sum by the number of its dropped examples, $|\alpha'_z - \alpha_z|$, times its largest per-example gradient norm.
A.2 EXPERIMENTATION DETAILS
A.2.1 DATASETS
CMNIST. We construct a colored MNIST dataset with spurious correlations by using colors as the spurious attributes, as follows. First, we define an image classification task with 5 classes by mapping every 2 consecutive digits (0 and 1, 2 and 3, 4 and 5, 6 and 7, 8 and 9) to the same class. We use the official test split of MNIST, randomly select 50k examples from the train split as the training set, and use the remaining 10k samples of the train split as the validation set.
Then, for each class yi, we color the foreground of a pcorr,i fraction of the training examples with color ai from the set of colors A = {#ff0000, #85ff00, #00fff3, #6e00ff, #ff0018}, represented by their hex codes. We call this fraction of the data the majority group of class yi. The higher pcorr,i is, the stronger the spurious correlation between the class yi and the spurious attribute ai. The remaining 1 − pcorr,i fraction of the training examples are colored with a random color from A \ {ai}. In Fig. 6, we visualize examples of the 5 classes with the 5 colors and highlight the majority groups with white bounding boxes. In our experiments, we used pcorr = [0.995, 0.95, 0.9, 0.8, 0.6] to construct spurious correlations with different strengths and groups with different sizes.
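A minimal sketch of this coloring procedure is given below; the color palette and correlation strengths follow the values above, while the foreground threshold and array handling are our own illustrative assumptions.

```python
import numpy as np

PALETTE = np.array([[0xFF, 0x00, 0x00], [0x85, 0xFF, 0x00], [0x00, 0xFF, 0xF3],
                    [0x6E, 0x00, 0xFF], [0xFF, 0x00, 0x18]], dtype=np.uint8)
P_CORR = [0.995, 0.95, 0.9, 0.8, 0.6]               # per-class correlation strength

def colorize(images, digit_labels, rng=None):
    """Map digits to 5 classes and color each digit's foreground with a
    class-correlated color (majority group) or a random other color (minority)."""
    rng = rng or np.random.default_rng(0)
    classes = digit_labels // 2                      # (0,1)->0, (2,3)->1, ..., (8,9)->4
    colored = np.zeros((*images.shape, 3), dtype=np.uint8)
    spurious = np.empty(len(images), dtype=int)
    for i, (img, c) in enumerate(zip(images, classes)):
        if rng.random() < P_CORR[c]:
            a = c                                    # correlated color -> majority group
        else:
            a = rng.choice([k for k in range(5) if k != c])
        colored[i][img > 0] = PALETTE[a]             # color only the foreground pixels
        spurious[i] = a
    return colored, classes, spurious
```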
A.3 ADDITIONAL EXPERIMENTS | 1. What is the focus of the paper regarding deep learning and sub-sampling?
2. What are the strengths of the proposed approach, particularly in its motivation and importance sampling algorithm?
3. What are the weaknesses of the paper, especially concerning its technical aspects and experimental designs?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Good generalization performance of DL could be attributed to undesirable overfitting of spurious biases in large datasets. This paper proposes a way to conduct smart sub-sampling by tracing the gradient trajectories of examples during the initial epochs. An importance sampling algorithm is then proposed to conduct the sub-sampling.
Strengths And Weaknesses
Strength: overall very well written paper with strong and clear motivation.
Weakness: my concern is primarily on the technical side.
Tractable toy examples: While the high-level idea of tracking gradients seems to make sense, the introduction of clustering and dimensionality reduction inevitably introduces a lot of approximation error into the process. The authors conducted experiments mainly on CIFAR-style cases, but it would make more sense to design some analytical examples to demonstrate the effectiveness of this work, where things stay low-dimensional to begin with.
Importance sampling: the choice of inverse weight seems a bit arbitrary. Have the authors considered an ablation study on the different choices of IS proposals?
Improving the implementations: I don't think ERM is the SOTA baseline to compare with, as it is known to have sub-par performance. In addition, the authors might want to conduct more ablation studies w.r.t. the gradient tracking, as some of the techniques adopted are quite sensitive to seed initialization (the k-means clustering, for example).
Clarity, Quality, Novelty And Reproducibility
The Clarity, Quality and Novelty of this paper could be improved. I consider the Reproducibility to be fair as no code has been provided yet. |
1. What is the main contribution of the paper, and how does it improve upon previous methods?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to identify and debias spurious features?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the experimental results, such as the choice of baseline methods, sensitivity analysis, and hyperparameter tuning?
5. How does the paper address the problem of OOD generalization, and what are the implications of the proposed solution for this issue? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method to upweight the smaller sub-populations that are found by clustering gradient trajectories. The method improves upon Group-DRO: it automatically identifies groups and prevents overfitting on spurious correlations. In order to achieve that, the authors use the premise that spurious features are abundant in majority groups and that large subpopulations with similar gradient trajectories can be identified during the first few epochs of training. Their main contribution is an automatic clustering process that identifies large groups with spurious features; eliminating these biases improves worst-case and OOD performance. The authors validate the effectiveness of their method on crafted datasets (Waterbirds and CMNIST) and general image classification datasets (CIFAR and Caltech-256). Improvements are shown in both classification accuracy and Grad-CAM saliency.
Strengths And Weaknesses
Strength:
An elegant solution for an important problem. DRO is intractable in deep learning and manually crafted groups are introduced to solve this problem. Automatic 1) selection of shared-feature groups, 2) identification of data with spurious features, and 3) debiasing has been an unsolved problem. This paper uses gradient-related statistics to identify majority groups and can inspire many future works in this area.
The experiment results are competitive, especially for debiasing and OOD generalization.
Weakness:
The motivation has not been stated clearly enough. The authors in fact introduce three concepts "long-tail"->"spurious features & debiasing"->"OOD & worst-case generalization". The link between majority group (long tail) and improved generalization is a bit weak and the author could better elucidate this connection. You may find these papers [1] [2] helpful on this subject.
I'm confused by the motivation for using gradient trajectories to identify majority groups. "In the initial training epochs, the network learns important features and the NTK undergoes rapid changes, which determine its final basin of convergence" and "As a result, the large subpopulations dictate the rapid initial change of the NTK and the prominent features learned in this phase". The authors seem to be motivated by the dynamics of NTK for identifying majority groups. Therefore, I'm confused why they chose gradient trajectory rather than gram matrix trajectory as the indicator.
Problem with Lemma 4.1
I think the result is not specific to this particular choice of grouping based on gradient. Actually, it seems to be only affected by group size and this result is rather trivial as balanced groups would be less affected by reweighting.
Experiments
ERM is a fairly weak baseline, and comparisons with DRO and EL2N are missing in Table 1.
Gradient trajectories are unstable and sensitive to hyperparameters. Sensitivity analysis and error bars are missing (CMNIST, Figure 5d). Also, a larger variety of models should be chosen and analyzed, as different backbones have very different training dynamics.
Code and hyperparameters are not provided.
[1] Ming, Y., Yin, H., and Li, Y. On the impact of spurious correlation for out-of-distribution detection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9): 10051-10059, 2022.
[2] Tang, K., et al. Invariant feature learning for generalized long-tailed classification. arXiv preprint arXiv:2207.09504, 2022.
Clarity, Quality, Novelty And Reproducibility
Clarity: The writing is easy to understand, but there are many typos. Novelty: Using gradients to identify biased groups seems quite novel to me. Reproducibility: Code and hyperparameters are not provided.
ICLR | Title
When Majorities Prevent Learning: Eliminating Bias to Improve Worst-group and Out-of-distribution Generalization
Abstract
Modern neural networks trained on large datasets achieve state-of-the-art (indistribution) generalization performance on various tasks. However, their good generalization performance has been shown to be contributed largely to overfitting spurious biases in large datasets. This is evident by the poor generalization performance of such models on minorities and out-of-distribution data. To alleviate this issue, subsampling the majority groups has been shown to be very effective. However, it is not clear how to find the subgroups (e.g. within a class) in large real-world datasets. Besides, naively subsampling the majority groups can entirely deplete some of their smaller sub-populations and drastically harm the in-distribution performance. Here, we show that tracking gradient trajectories of examples in initial epochs allows for finding large subpopulations of data points. We leverage this observation and propose an importance sampling method that is biased towards selecting smaller subpopulations, and eliminates bias in the large subpopulations. Our experiments confirm the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior inand out-of-distribution performance on various datasets.
1 INTRODUCTION
Large datasets have enabled modern neural networks to achieve unprecedented success on various tasks. Large datasets are, however, often heavily biased towards the data-rich head of the distribution (Le Bras et al., 2020; Sagawa et al., 2020; 2019). That means, there are large groups of potentially redundant data points belonging to majority subpopulations, and smaller groups of examples representing minorities. Larger groups often contain spurious biases, i.e., unintended but strong correlations between examples (e.g. image background) and their label. In such settings, overparameterized models learn to memorize the spurious features instead of the core features for the majority, and overfit the minorities (Sagawa et al., 2020). As a result, despite their superior performance on in-distribution data, overparameterized models trained on biased datasets often have a poor worst-group and out-of-distribution generalization performance.
To improve the high worst-group error and of out-of-distribution generalization, techniques such as distributionally robust optimization (DRO), or up-weighting the minority groups are commonly used (Sagawa et al., 2019; 2020). However, such methods have been shown to be highly ineffective for overparameterized models in the presence of spurious features (Sagawa et al., 2020). When majority groups are sufficiently large and the spurious features are strong, overparameterized models choose to exploit the spurious features for the majorities and memorize the minorities, as it entails less memorization on the entire data. In this setting, upweighting minorities only exacerbates spurious correlations, and subsampling the majorities has been advocated for (Sagawa et al., 2020). But, this requires the groups to be specified beforehand, which is not available for real-world datasets. Besides, random subsampling of the majority groups can entirely deplete some of their subpopulations and drastically harm the in-distribution performance (Toneva et al., 2018; Paul et al., 2021).
In this work, we propose an effective way to find large subpopulations of examples (see Fig. 1), and subsample them to ensure inclusion of representative examples from all the subpopulations. We rely on the following recent observations. In the initial training epochs, the network learns important
features and the NTK undergoes rapid changes, which determine its final basin of convergence (Fort et al., 2020). This results in learning a linear function during the initial epochs, followed by learning functions of increasing complexity (Nakkiran et al., 2019). We show that large subpopulations are responsible for forming the initial linear model, by inserting large gradient forces in the first few epochs. The minorities, on the other hand, dictate the higher-complexity functions later in training. To find the large subpopulations, we track the gradient trajectories—the way the gradient changes— during initial training epochs. Then, we cluster similar gradient trajectories together, and employ an importance sampling method that samples data points from every cluster by a probability equal to the inverse of the size of the cluster it belongs to. This allows selecting a balanced subset from different clusters. By studying the effect of our method on the evolution of the model early during the training, we show that our method allows the model to better learn from all the subpopulations by balancing the gradient forces between different groups. This enables learning higher-quality features.
Our empirical studies confirm the effectiveness of our method in improving the worst-group and out-of-distribution generalization, while enjoying a superior in-distribution performance even when the size of the selected sample is small. Notably, on CMNIST (Alain et al., 2015) and Waterbird (Sagawa et al., 2019) datasets which contain strong spurious biases, our method achieves a comparable or even better performance than the state-of-the-art methods, which rely on the underlying group information to uniformly subsample the majority group. In addition, on CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and Caltech256 (Griffin et al., 2007) our method provides a superior indistribution performance to state-of-the-art data pruning methods, based on forgettability (Toneva et al., 2018) and El2N (Paul et al., 2021) scores, especially for small subsets. At the same time, it outperforms such methods on out-of-distribution data, CIFAR10C (Hendrycks & Dietterich, 2019).
2 RELATED WORK
Data pruning for worst-group generalization. To improve the generalization performance on minorities, preventing the model from learning spurious features is very helpful (Sagawa et al., 2019; 2020). For overparameterized models, randomly subsampling the majorities has been shown to be the most effective (Sagawa et al., 2020) than distributionally robust optimization (DRO) (Sagawa et al., 2019) and up-weighting the minority groups (Sagawa et al., 2020). However, this requires the group labels to be specified beforehand, which is not available for large real-world datasets. Besides, if the majority contains imbalanced subpopulations, random subsampling inherits similar biases. Finally, random subsampling of the majority groups can entirely deplete some of their smaller subpopulations and drastically harm the in-distribution performance, as we empirically show.
A different line of work (Sohoni et al., 2020; Nam et al., 2020; Ahmed et al., 2020; Liu et al., 2021; Creager et al., 2021; Taghanaki et al., 2021; Zhang et al., 2022; Nam et al., 2021) studies how to improve worst-group generalization without having access to group labels. These methods require training a model first to minimize the average empirical risk before training the robust model, which doubles the training time and is thus also not practical for large real-world datasets.
Data pruning for OOD generalization. Spurious features have been shown to also harm the outof-distribution generalization (Le Bras et al., 2020). To alleviate this, Swayamdipta et al. (2020) proposed to train on the subset of most ambiguous instances whose true class probabilities fluctuate frequently during training, and Le Bras et al. (2020) employed AFLite (Le Bras et al., 2020) to iteratively filter highly-predictable examples by training multiple linear classifiers on different random partitions of the data. However, such methods drastically harm the in-distribution performance.
Data pruning for in-distribution generalization. The main idea behind all data pruning methods is to define a notion of example difficulty, and prune the easy-to-learn examples. Notably, Coleman et al. (2020) used a smaller trained proxy model to find the most uncertain examples to train a larger model. Toneva et al. (2018) defined a forgetting event of an example as transitioning from being classified correctly to incorrectly during training, and drop the examples with no forgetting events. Most recently, Paul et al. (2021) dropped examples with the lowest average errors (EL2N) recorded early in training and averaged over several initializations. The above heuristics require full or partial training of multiple models, can only drop a relatively small fraction of the examples, and hurt the out-of-distribution performance, as we show in our experiments. In contrast, our method can successfully alleviate bias and achieve a superior in- and out-of-distribution performance.
3 PROBLEM FORMULATION
Training machine learning models is often reduced to minimizing an empirical risk function (ERM). That is, the goal is to find the parameter w∗ that minimizes the average error on the entire training data D = (X,y) = {(xi, yi)}i∈V , where V = {1, · · · , n} indexes the training data. Formally,
w∗ = argminw∈WL(w), L(w) = E(xi,yi)∈D[l(f(w,xi), yi))], (1)
wherew is the model parameter, and f(w,xi) and l(f(w,xi), yi)) are the output of the network and the value of the loss associated to a training example (xi, yi), respectively. For large datasets, the average error L is minimized by applying (Stochastic) Gradient Descent with learning rate η starting from a random initial point w0:
wt+1 = wt − η∇L(wt), ∇L(wt) = J (wt,X )T∇f l(f(wt,X ), y), (2)
where y = {yi}ni=1, X = {xi}ni=1, and J (w,X ) ∈ Rn×m is the Jacobian matrix associated with the nonlinear network f : Rd → Ro defined as
J (w,X ) = [∂f(w,x1)
∂w · · · ∂f(w,xn) ∂w
]T , (3)
and ∇f l(f(wt,XS), yS) is the gradient of the loss w.r.t. the network. Furthermore, Θt(X,X ) = J (wt,X )J (wt,X )T is the empirical neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2018), describing the evolution of the network during training by gradient descent.
Spurious features and majority groups. We consider a similar setting with (Sagawa et al., 2019; 2020), where each training example (xi, yi) is associated with a spurious attribute ai that is correlated with its label yi. The examples with the same spurious attribute and label make a group gj,k ∈ G, where gj,k = {(xi, yi)|i ∈ V, ai = j, yi = k}. The groups which contain considerably more examples than the rest are referred to as majority groups. For example, in the Waterbirds dataset (Sagawa et al., 2019), every example (xi, yi) belongs to one of the 2 classes, yi ∈ {waterbird, landbird} and the image background ai ∈ {water background, land background} is spuriously correlated with the label yi. Thus, there are four groups of examples associated with every combination of spurious attribute and label, i.e., G ={(waterbird, water background), (waterbird, land background), (landbird, water background), (landbird, land background)}. The majority groups are (waterbird, water background), and (landbird, land background). Importantly, in this work, we assume that the groups and spurious attributes are not known at training time.
Subpopulations. Every dataset can be partitioned into s different subpopulations of examples that are similar in terms of their effect on training, i.e. the indices of the training data V can be partitioned into V = {V1, · · · , Vs}. For a formal definition, see Section 4.2. Note that subpopulations may represent a finer clustering compared to group clustering. Fig. 1 shows an illustration of groups vs. subpopulations for Waterbird dataset. We develop a method that automatically clusters the data and identifies large subpopulations.
Objective. Our goal is to find a subset S ⊆ V of size r = |S| from all training examples indexed by V , such that training on the subset alleviates the effect of spurious biases and improves (1) the worst-group generalization when the groups are imbalanced, or (2) out-of-distribution generalization under distribution shift. In both cases, we aim to preserve a good performance on the in-distribution data. In particular, the worst-group error is defined as,
Errwg = max g∈G Exi,yi|g[yi ̸= yf (w,xi)], (4)
where yf (w, (xi, yi) is the label predicted by the model. In other words, Errwg measures the highest fraction of examples that are incorrectly classified across all groups. Similarly, the out-of distribution (OOD) performance measures the performance of the model f trained on the training set D, and tested on D′ = (c(X ), y), when c is from a set of shifting functions C. Formally,
Errood = E(xi,yi)∈D′ [yi ̸= yf (w,xi)], (5)
measures the fraction of examples that are misclassified when (xi, yi) is drawn i.i.d. from D′.
4 ELIMINATING BIAS IN THE DATA
In this section, we present our main results. We start by discussing the effect of large subpopulations on early learning dynamics. Then, we explain how gradient trajectories of examples during the initial training epochs allow finding the large subpopulations. Next, we employ importance sampling to find a subset that contains a similar number of examples from subpopulations. Finally, we study how the subset found by our method affects the network’s early learning dynamics.
4.1 EFFECT OF LARGE SUBPOPULATIONS ON EARLY LEARNING DYNAMICS
Recent empirical studies on neural networks’ training dynamics show that in the initial epochs the performance of a network trained by SGD can be explained by a linear classifier. Formally, if F and L are the corresponding random variables for the neural network and a linear model respectively, the mutual information between F and y conditioned on L, I(F; y|L), captures the part of F’s success on the prediction of y in addition to L. Then the performance correlation between F and L, μ_y(F; L) := I(F; y) − I(F; y|L), is the part of F’s success on the prediction of y that can be explained by L. Nakkiran et al. (2019) show that there exists T_0 such that μ_y(F_t; L) ≈ I(F_t; y) at training step t for all t < T_0. As training progresses, the network learns functions of increasing complexity (Nakkiran et al., 2019). Furthermore, Fort et al. (2020) show that during the first few epochs of training, neural networks experience a rapid initial transient which determines the final basin of convergence. During this period, the NTK changes very rapidly and learns useful features.
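As a rough illustration of the performance-correlation measure above, the sketch below forms plug-in estimates of I(F; y) and I(F; y|L) from the discrete predictions of the network and of a linear model trained on the same data; this estimator and the helper names are our own simplification, not the procedure of Nakkiran et al. (2019).

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mu_y(net_preds, lin_preds, y):
    """Plug-in estimate of mu_y(F;L) = I(F;y) - I(F;y|L) from predicted labels."""
    i_fy = mutual_info_score(y, net_preds)
    # I(F;y|L): mutual information within each stratum of L, weighted by its frequency.
    i_fy_given_l = sum(
        np.mean(lin_preds == l)
        * mutual_info_score(y[lin_preds == l], net_preds[lin_preds == l])
        for l in np.unique(lin_preds))
    return i_fy - i_fy_given_l
```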
First, we empirically show that large subpopulations are responsible for forming the initial linear model in the first few training epochs. Effectively, during training every example contributes to minimizing the loss by its gradient. Examples with similar gradients insert a similar force on the model and affect the model in the same direction. In the first few epochs, large gradient forces of large subpopulations highly bias the initial linear function. As the gradient forces of large subpopulations persist during training, the initially learned linear function is retained, to the point of zero training error (Nakkiran et al., 2019). As a result, large subpopulations dictate the rapid initial change of the NTK and the prominent features learned in this phase. On the other hand, smaller subpopulations have a smaller influence on the model, and require a larger number of training iterations to be learned by the higher-complexity functions that are shaped later during the training.
When the spurious bias is strong, the initial linear function is dictated mainly by the spurious feature, and persists during training. This yields low training and generalization error on the large groups, and thus prevents learning their core features. On the other hand, on the small subpopulations, functions with much higher complexity overfit and memorize the minorities. Such functions result in a small training error but a poor worst-group generalization performance on the minorities. This further explains the observation by Sagawa et al. (2020) showing that overparameterized models memorize the spurious feature and overfit the minorities. Effectively, the spurious features in large subpopulations prevent the model from learning the core features from the data.
Fig. 2a shows that the network’s prediction early in training can be well explained by a linear classifier (red). Besides, the network trained on large subpopulations can be well explained by the same
linear classifier (purple). This confirms that early training dynamics are dictated by large subpopulations. We also see that the behavior of the network trained on the subset selected by our method (discussed in Section 4.3) cannot be explained by the same linear model (brown). Fig. 2b shows that the linear model explaining the network trained on full data (blue) is similar to the linear model explaining the network trained on large subpopulations (orange), but different than the one explaining the network trained on our chosen subset (green). We see that the performance of the network can be well explained by a linear model, and the linear model fitted on large subpopulations closely matches the one fitted on the entire data. These results further confirm that the linear classifiers fitted to the entire data and to the large subpopulations are effectively the same during the initial training epochs.
4.2 FINDING THE LARGE SUBPOPULATIONS IN EVERY CLASS
The first question we aim to answer is how to find the large subpopulations of the data, without having such labels. As discussed, larger subpopulations insert a large gradient force on the model, and are learned during the initial epochs. When an example is learned, its gradient becomes nearly zero. Hence, every example has a gradient trajectory interpolating between its gradient at initialization and zero. Subpopulations that affect the model similarly have a similar gradient trajectory during training. Therefore, large subpopulations with similar gradient trajectories can be identified based on their gradient trajectory during the first few epochs.
To find the large subpopulations, we cluster the gradient trajectories during the initial epochs of training. As gradients are very high-dimensional, we first reduce the gradient dimensionality to better find the clusters. To do so, we rely on the following observation: for neural networks, the variation of the gradient norms is mostly captured by the gradient of the loss w.r.t. the input to the last layer of the network (Katharopoulos & Fleuret, 2018). These lower-dimensional gradients can be efficiently computed in closed form, and have been used as a gradient proxy in several recent works (Mirzasoleiman et al., 2020; Paul et al., 2021; Pooladzandi et al., 2022). Formally, for every example i we build its gradient trajectory by concatenating the lower-dimensional gradients during the first t training epochs, i.e.,
∇_f^{0:t} l(x_i, y_i) = [∇_f l(f(w_0, x_i), y_i), ∇_f l(f(w_1, x_i), y_i), · · · , ∇_f l(f(w_t, x_i), y_i)], (6)

where ∇_f l(f(w_j, x_i), y_i) is the gradient of the loss w.r.t. the input to the last layer of the network at epoch j for training example (x_i, y_i). Note that as the gradient of an example depends on its label, examples from different classes do not have similar gradients. Hence, we find similar gradient trajectories within every class separately.
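For a softmax cross-entropy loss, the gradient of the loss w.r.t. the logits has the closed form softmax(z) − one_hot(y), so the trajectory in Eq. (6) can be assembled from per-epoch logits alone. A minimal sketch, assuming the logits of each example have been saved at every early epoch (the helper names are ours):

```python
import numpy as np

def last_layer_grad(logits, label):
    """Closed-form cross-entropy gradient w.r.t. the logits: softmax(z) - one_hot(y)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    p[label] -= 1.0
    return p

def gradient_trajectory(per_epoch_logits, label):
    """Eq. (6): concatenate the low-dimensional gradients over the first t epochs."""
    return np.concatenate([last_layer_grad(z, label) for z in per_epoch_logits])
```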
Next, we cluster gradient trajectories to find the large subpopulations in every class. While any clustering algorithm can be used, we use the k-medoids objective to find the clusters efficiently. In particular, for 0 < κ < 1, we partition a class indexed by Vc ⊆ V to kc = κ · |Vc| subpopulations, by first finding the set Sc of its kc most centrally located gradient trajectories (medoids) by solving:
S_c^* ∈ argmax_{S⊆V_c, |S|≤k_c} F(S),   where   F(S) := Σ_{i∈V_c} max_{j∈S} (cnt − ∥∇_f^{0:t} l(x_i, y_i) − ∇_f^{0:t} l(x_j, y_j)∥), (7)
Algorithm 1 Training without Bias
Input: Model f, initial epoch number t, subset fraction κ
Output: Model f trained without bias
 1: Train the model f for t epochs from w_0 and save the gradient trajectories ∇_f^{0:t} l(x_i, y_i) for all i ∈ V
 2: for every class V_c do
 3:   S_c ← ∅
 4:   for i = 1, 2, · · · , κ · |V_c| do
 5:     j ∈ argmax_{e ∈ V_c \ S_c} F(e | S_c)
 6:     S_c = S_c ∪ {j}
 7:   for i ∈ S_c do
 8:     V_{c,i} = {j ∈ V_c | i = argmin_{s ∈ S_c} ∥∇_f^{0:t} l(x_j, y_j) − ∇_f^{0:t} l(x_s, y_s)∥}
 9: for j ∈ V do
10:   w_j = |V_{c,i}| s.t. j ∈ V_{c,i}
11:   p_j = u_j^{1/w_j}, where u_j ∈ (0, 1) is a uniform random number
12: S = {r examples with the largest p_j}
13: Train the model f from w_0 on S
where cnt is a large constant. Then, to find the subpopulations, we assign every example to the medoid j ∈ S_c with the most similar trajectory. This partitions the examples in class V_c into k_c subpopulations V_c = {V_{c,1}, · · · , V_{c,k_c}}, where V_{c,j} = {i ∈ V_c | j = argmin_{s∈S_c} ∥∇_f^{0:t} l(x_i, y_i) − ∇_f^{0:t} l(x_s, y_s)∥}.
The maximization problem (Eq. (7)) is NP-hard. However, since the k-medoids objective is monotone and submodular¹, a near-optimal solution of size k can be found efficiently in O(|V| · k) time. For maximizing a monotone submodular function, the greedy algorithm provides a (1 − 1/e) approximation guarantee (Wolsey, 1982). The greedy algorithm starts with the empty set S_0 = ∅, and at each iteration l chooses an element e ∈ V such that S_l = S_{l−1} ∪ {argmax_{e∈V} F(e | S_{l−1})}.
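A minimal sketch of this greedy selection for one class is given below; it instantiates F(S) from Eq. (7) with an explicit pairwise distance matrix, which is only practical for moderately sized classes, and the function names and O(n²) memory layout are ours rather than the authors' implementation.

```python
import numpy as np

def greedy_medoids(trajectories, k, cnt=1e6):
    """Greedy (1 - 1/e)-approximate maximization of the k-medoids objective in Eq. (7).

    trajectories: (n, d) array of per-example gradient trajectories of one class.
    Returns the selected medoid indices and the medoid each example is assigned to.
    """
    n = len(trajectories)
    dists = np.linalg.norm(trajectories[:, None, :] - trajectories[None, :, :], axis=-1)
    sims = cnt - dists          # cnt assumed large enough that all similarities are positive
    best = np.zeros(n)          # current best similarity of each example to the selected medoids
    selected = []
    for _ in range(k):
        # Marginal gain of adding each candidate medoid e to the current set.
        gains = np.maximum(best[:, None], sims).sum(axis=0) - best.sum()
        if selected:
            gains[selected] = -np.inf
        e = int(np.argmax(gains))
        selected.append(e)
        best = np.maximum(best, sims[:, e])
    # Form the subpopulations by assigning each example to its most similar medoid.
    assignment = np.asarray(selected)[np.argmin(dists[:, selected], axis=1)]
    return selected, assignment
```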
4.3 BALANCING THE SUBPOPULATIONS
To alleviate the bias of the large subpopulations and enable effective learning of core features, we aim to prune the large gradient trajectory clusters formed in initial epochs. This prevents the initial linear model from being biased toward the large subpopulations. In doing so, we allow the initial linear model to capture the complexity in different subpopulations, and learn the core features instead of the spurious features of the majorities. Hence, the model obtains a better generalization performance on minorities and out-of-distribution data. However, this should be done carefully as over-pruning the large subpopulations prevents them from participating in forming the initial model. This drastically harms the in-distribution generalization performance of the model.
To address this, we employ an importance sampling method on the union of the subpopulations of all classes, which selects every example with probability proportional to the inverse of the size of the subpopulation it belongs to. In particular, we weight every example i ∈ V_{c,j} by the size of the cluster j ∈ S_c it belongs to, i.e., w_i = |V_{c,j}|. Then, we use the algorithm of Efraimidis & Spirakis (2006) to select a sample with probabilities equal to p_i = 1/w_i, without replacement. The sampling procedure works as follows. For each example i in the dataset, we independently generate a uniform random number u_i ∈ (0, 1) and calculate q_i = u_i^{1/w_i}. The examples with the r largest q_i form the final subset S. Our sampling method biases the sample selection towards the smaller subpopulations, and drops many examples from the larger subpopulations. However, it still preserves the patterns in larger subpopulations, by including a smaller number of their examples in the sample. Effectively, our method balances the gradient forces between different subpopulations. This increases the strength of the core gradient vs. the spurious gradient. In doing so, it allows different subpopulations to participate in forming the initial linear model and dictate a more generalizable basin in which the model can be further fine-tuned. Hence, it enables better learning of the core features.
The pseudocode is illustrated in Algorithm 1.
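For completeness, here is a minimal sketch of the weighted sampling step (lines 9–12 of Algorithm 1) using the key-based scheme of Efraimidis & Spirakis (2006); following the stated inclusion probability, the sampling weight of example i is taken to be 1/|V_{c,j}|, and the helper name is ours.

```python
import numpy as np

def weighted_sample_without_replacement(weights, r, seed=0):
    """Efraimidis & Spirakis (2006): draw r items without replacement with
    inclusion probability proportional to the given weights."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    u = rng.uniform(size=len(weights))
    keys = u ** (1.0 / weights)          # larger weight -> key closer to 1
    return np.argsort(-keys)[:r]

# Usage sketch: cluster_sizes[i] = |V_{c,j}| of the subpopulation example i belongs to,
# so that small subpopulations are favored and large ones are subsampled.
# subset = weighted_sample_without_replacement(1.0 / cluster_sizes, r)
```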
1A set function F : 2V → R+ is submodular if F (e|S) = F (S ∪ {e})− F (S) ≥ F (T ∪ {e})− F (T ), for any S ⊆ T ⊆ V and e ∈ V \ T . F is monotone if F (e|S) ≥ 0 for any e∈V \S and S ⊆ V .
4.4 EFFECT OF PRUNING ON EARLY NETWORK EVOLUTION
Next, we take a closer look at the effect of our method on the evolution of the model. In particular, we show that training on the subset S selected by our method decreases the speed of learning on large subpopulations and lets the other groups have a larger contribution to the initial phase of learning.
When the model is trained on the subset D_S = (X_S, y_S), the weight evolution over one step can be written as

∆_S w_t = −η∇L(w_t, X_S) = −η J(w_t, X_S)^T ∇_f l(f(w_t, X_S), y_S). (8)

Furthermore, the network evolution can be approximated using a first-order Taylor expansion, i.e.,

∆_S f(w_t, X) = J(w_t, X) ∆_S w_t = −η J(w_t, X) J(w_t, X_S)^T ∇_f l(f(w_t, X_S), y_S) (9)
             = −η Θ_t(X, X_S) ∇_f l(f(w_t, X_S), y_S), (10)
where Θt(X,XS) = J (wt,X )J (wt,XS)T is the empirical neural tangent kernel, describing the evolution of the network when training only on the subset S. The following Lemma quantifies the effect of pruning the large subpopulation on the model evolution at one training step.
Lemma 4.1 Training on the subset S sampled from ζ subpopulations found by our method, with learning rate η ≤ 1/∥J(w_t, X)∥, changes the predictions of the model at every step by at most:

∥∆f(w_t, X) − ∆_S f(w_t, X)∥ = η∥Θ_t(X, X) ∇_f l(X, w_t) − Θ_t(X, X_S) ∇_f l(X_S, w_t)∥ (11)
 ≤ Σ_{z∈[ζ]} |α′_z − α_z| · max_{j∈V_z} ∥∇l(f(w_t, x_j), y_j)∥, (12)
where αz = |Vz| is the size of subpopulation Vz , and α′z = |Vz ∩ S| is its size in the subset S.
The proof can be found in Appendix A.1.
Lemma 4.1 upper-bounds how training on the subset found by our method changes the effect of different subpopulations on the model predictions. When the subpopulations are approximately balanced, we have α′_z ≈ κα_z. Thus, training on the subset S yields a network evolution similar to that of the full data, and only scales down the learning rate. However, when subpopulations are imbalanced, it effectively decreases the gradient force of large subpopulations by |α_z − α′_z| · max_{j∈V_z} ∥∇_f l(f(w_t, x_j), y_j)∥. Effectively, this reduces the speed of learning on such subpopulations and their bias on the model. On the other hand, our importance sampling method preserves the small subpopulations, i.e., α_z ≈ α′_z, and maintains their original gradient force on the model. Therefore, our subset balances the gradient forces and lets different subpopulations participate in forming the lower-complexity models in the initial epochs. As Fig. 3 shows, individual examples in large subpopulations have a smaller gradient norm. Hence, a larger number of them can be pruned without significantly affecting the model. However, entirely dropping the large subpopulations has a larger cumulative effect than dropping a smaller number of examples in small subpopulations with larger norms. Hence, it drastically harms the in-distribution performance.
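To make the quantity controlled by Lemma 4.1 concrete, the sketch below computes the empirical kernels Θ_t(X, X) and Θ_t(X, X_S) for a small scalar-output PyTorch model by stacking per-example parameter gradients; it is purely illustrative (a plain loop over examples, no vectorization), assumes a scalar output, and the function name is ours.

```python
import torch

def empirical_ntk(model, x_a, x_b):
    """Theta_t(A, B) = J(w_t, A) J(w_t, B)^T for a scalar-output model."""
    def jacobian_rows(x_batch):
        rows = []
        for x in x_batch:
            model.zero_grad()
            model(x.unsqueeze(0)).squeeze().backward()
            rows.append(torch.cat([p.grad.reshape(-1) for p in model.parameters()]))
        return torch.stack(rows)
    return jacobian_rows(x_a) @ jacobian_rows(x_b).T

# With these kernels, the one-step change eta * ||Theta(X,X) grad_l(X) - Theta(X,X_S) grad_l(X_S)||
# from Eq. (11) can be evaluated directly for a candidate subset X_S.
```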
5 EXPERIMENTS
In this section, we evaluate the effectiveness of our method in assisting neural networks to learn better features. In particular, we consider the following two scenarios. First, we apply our method to improve the worst-group generalization performance, when training data contains spurious correlation. Then, we consider the application of our method to improve out-of-distribution performance, under distribution shift. In both cases, we also compare the in-distribution generalization performance of the networks trained on our subset vs full training data.
5.1 WORST-GROUP GENERALIZATION IN PRESENCE OF SPURIOUS CORRELATION
First, we evaluate the worst-group generalization performance of a model trained on our subset vs. the full data, in the presence of spurious correlation. We record gradient trajectories during the initial 4 epochs and select 10% of the training examples as the subset. The reported results are averaged over 3 runs.

Datasets & Models. We apply our method to the Colored-MNIST and Waterbirds datasets. The Colored-MNIST dataset is a synthetic dataset derived from MNIST (LeCun et al., 1998). It was first proposed in (Alain et al., 2015) as a binary classification task that contains spurious correlations—the grey-scale digits are changed to colors that are strongly correlated with the labels. We use a 5-layer CNN with 2 convolutional layers and 3 fully-connected layers. The Waterbirds dataset is introduced by Sagawa et al. (2019) to study the spurious correlation between the background and the foreground in image recognition. Species in the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset (Wah et al., 2011) are grouped into two classes, waterbirds and landbirds. All birds are then cut and pasted onto new background images, with waterbirds more likely to appear on water and landbirds having a higher probability on land. There are 4795 training examples in total: 3498 for landbirds with land background, 184 for landbirds with water background, 56 for waterbirds with land background, and 1057 for waterbirds with water background. We use a pretrained ResNet-50 model.

Baselines. Empirical risk minimization (ERM) trains on all data, Random selects a subset uniformly at random, Upweight weights every example by the inverse of its group size, and Balanced samples an equal number of examples from the different groups.

Ablation. To show the failure mode of random sampling when the majority has imbalanced subpopulations, we modify the dataset to make it more imbalanced, by pruning smaller clusters.

Evaluation metrics. We use two metrics proposed in Sagawa et al. (2019), namely worst-group accuracy and adjusted average accuracy. Worst-group accuracy is the minimum accuracy across all groups, and adjusted average accuracy is the average accuracy over groups weighted by their size.

Results. Table 1 shows that the models trained on subsets found by our method obtain the highest worst-group and in-distribution test accuracy, when compared with baselines that do not require group labels. Besides, our method achieves a performance comparable to the methods that use the group information, and even outperforms them on the Waterbirds dataset. We note that group labels are often not available in real-world datasets. Methods that do not rely on group labels, including ours, do not require knowing the minority groups and are therefore more practical in realistic settings.
GradCam. Fig. 4 demonstrates GradCAM (Selvaraju et al., 2017) visualizations depicting saliency maps for samples from the Waterbirds dataset with water and land backgrounds. Warmer colors denote higher saliency, suggesting that the model considered these pixels more important in making the final classification measured by gradient activations. We see that the subset found by our method allows the model to learn the core features much better than ERM and Random baselines.
5.2 OUT-OF-DISTRIBUTION GENERALIZATION UNDER DISTRIBUTION SHIFT
Next, we empirically evaluate the in-distribution and out-of-distribution performance of our method under distribution shift. The results are based on 3 independent runs, each with a different mini-batch order and initial parameter values. We record gradient trajectories during the initial 4 epochs.

Datasets. We apply our method to CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), and Caltech-256 (Griffin et al., 2007). In particular, we keep the number of training iterations fixed (78k for CIFAR-10 and CIFAR-100, and 4.8k for Caltech-256) as we vary the size of the subset.

Baselines. We compare our method with Random sampling, and with the state-of-the-art baselines for in-distribution data pruning based on EL2N (Paul et al., 2021) and forgetting scores (Toneva et al., 2018). The EL2N score of a training example i is defined as E∥p(w, x_i) − y_i∥_2, where p(w, x_i) is the softmax output of the network and y_i the one-hot label. We calculate EL2N after 20 epochs of training and average it over 10 different runs, as this is shown to be the most accurate. The forgetting score of an example is the number of times the example is misclassified after being correctly classified during the entire training. We calculate the number of forgetting events for each training example by averaging over 5 runs of 200 epochs, as suggested by Toneva et al. (2018).

Results. Fig. 5 (a), (b), (c) show that on the different datasets, training on the subset selected by our method gives higher in-distribution test accuracy than Random, EL2N, or forgetting scores, particularly when the subset is small. Note that the EL2N and forgetting-score baselines use more information, gathered over many training epochs and multiple runs. Importantly, Fig. 5 (d) confirms that our method outperforms the baselines on CIFAR-10-C (Hendrycks & Dietterich, 2019), with distribution shift. We train on our downsampled CIFAR-10 training set, and test on CIFAR-10-C, a collection of OOD test sets for CIFAR-10. For each corruption type, we report the average test accuracy over 5 different intensity levels. Our method can achieve at least 2% higher test accuracy than the other baselines. For some corruption types (Gaussian noise, shot noise, and impulse noise), our performance is even on par with or surpasses training on the full data.
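For reference, below are minimal sketches of the two baseline scores, following their usual definitions (the exact checkpointing and averaging schedules are described above; the array layouts and function names here are ours):

```python
import numpy as np

def el2n_scores(softmax_probs_per_run, onehot_labels):
    """EL2N: ||p(w, x_i) - y_i||_2 at an early checkpoint, averaged over runs.
    softmax_probs_per_run: (runs, n, classes); onehot_labels: (n, classes)."""
    return np.linalg.norm(softmax_probs_per_run - onehot_labels[None], axis=-1).mean(axis=0)

def forgetting_counts(correct_per_epoch):
    """Forgetting score: number of correct -> incorrect transitions during training.
    correct_per_epoch: (epochs, n) boolean matrix of per-example correctness."""
    c = np.asarray(correct_per_epoch, dtype=int)
    return np.clip(c[:-1] - c[1:], 0, None).sum(axis=0)
```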
6 CONCLUSION
We showed that larger subpopulations containing spurious biases prevent learning high-quality features. We showed that large subpopulations can be identified by tracking gradient trajectory of examples in initial epochs. Then, we proposed an importance sampling method to balance the subpopulations and ensure inclusion of representative examples from all the subpopulations. Our experiments confirmed the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior in- and out-of-distribution performance on various datasets.
A APPENDIX
A.1 PROOF OF LEMMA 4.1
The logits evolution at one step can be written as:
∥∆f(w_t, X) − ∆_S f(w_t, X)∥ = η∥Θ_t(X, X) ∇_f l(X, w_t) − Θ_t(X, X_S) ∇_f l(X_S, w_t)∥ (13)
 = η∥J(w_t, X) J(w_t, X)^T ∇_f l(X, w_t) − J(w_t, X) J(w_t, X_S)^T ∇_f l(X_S, w_t)∥ (14)
 ≤ η∥J(w_t, X)∥ · ∥J(w_t, X)^T ∇_f l(X, w_t) − J(w_t, X_S)^T ∇_f l(X_S, w_t)∥ (15)
 ≤ ∥ Σ_{i∈V} ∇l(f(w_t, x_i), y_i) − Σ_{j∈S} ∇l(f(w_t, x_j), y_j) ∥ (16)
 = ∥ Σ_{z∈[ζ]} Σ_{j∈V_z \ S} ∇l(f(w_t, x_j), y_j) ∥ (17)
 ≤ Σ_{z∈[ζ]} Σ_{j∈V_z \ S} ∥∇l(f(w_t, x_j), y_j)∥ (18)
 ≤ Σ_{z∈[ζ]} |α′_z − α_z| · max_{j∈V_z} ∥∇l(f(w_t, x_j), y_j)∥, (19)

where the inequality in Eq. (16) holds because η ≤ 1/∥J(w_t, X)∥.
A.2 EXPERIMENTATION DETAILS
A.2.1 DATASETS
CMNIST. We construct a colored MNIST dataset with spurious correlations, using colors as the spurious attributes, as follows. First, we define an image classification task with 5 classes by mapping every 2 consecutive digits (0 and 1, 2 and 3, 4 and 5, 6 and 7, 8 and 9) into the same class. We use the official test split of MNIST, randomly select 50k examples from the train split as the training set, and then use the remaining 10k samples in the train split as the validation set.
Then, for each class y_i, we color the foreground of a p_corr,i fraction of the training examples with color a_i from the set of colors A = {#ff0000, #85ff00, #00fff3, #6e00ff, #ff0018}, represented by their hex codes. We call this fraction of the data the majority group of class y_i. The higher p_corr,i is, the stronger the spurious correlation between the class y_i and the spurious attribute a_i. We color the remaining 1 − p_corr,i fraction of the training examples with a random color from A \ a_i. In Fig. 6, we visualize examples in the 5 classes with the 5 colors and highlight the majority groups with white bounding boxes. In our experiments, we used p_corr = [0.995, 0.95, 0.9, 0.8, 0.6] to construct spurious correlations with different strengths and groups with different sizes.
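A short sketch of this coloring procedure follows; the digit-to-class mapping, palette, and correlation strengths are taken from the description above, while the helper names and the exact foreground masking are our own assumptions.

```python
import numpy as np

PALETTE = np.array([[255, 0, 0], [133, 255, 0], [0, 255, 243],
                    [110, 0, 255], [255, 0, 24]], dtype=np.float32)  # hex codes of A
P_CORR = [0.995, 0.95, 0.9, 0.8, 0.6]

def colorize(images, digits, seed=0):
    """Map digits to 5 classes and color the foreground with a spuriously correlated color."""
    rng = np.random.default_rng(seed)
    labels = digits // 2                               # (0,1)->0, (2,3)->1, ..., (8,9)->4
    colored = np.zeros(images.shape + (3,), dtype=np.float32)
    for i, (img, y) in enumerate(zip(images, labels)):
        if rng.uniform() < P_CORR[y]:                  # majority group: correlated color a_i
            color = PALETTE[y]
        else:                                          # minority: a random color from A \ a_i
            color = PALETTE[rng.choice([c for c in range(5) if c != y])]
        colored[i] = img[..., None] / 255.0 * color    # color only the (bright) foreground
    return colored, labels
```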
A.3 ADDITIONAL EXPERIMENTS

1. What is the focus and contribution of the paper on identifying subpopulations within classes?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its effectiveness and limitations?
3. Do you have any concerns about the paper's experimental design and results, such as the choice of architecture, hyperparameters, and datasets?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper, such as expanding the analysis of the obtained subpopulation clusters or providing more detail on the downsampling process?
Summary Of The Paper
The authors address the problem that, in a dataset, examples within classes can be grouped into subpopulations that are similar to each other, and this influences the dynamics. They propose an intuitive solution, which does not require previous knowledge of the subpopulations. This solution consists of looking at the gradients of the single examples during the initial stages of learning. These gradients are expected to be similar for elements belonging to a single subpopulation, and this allows identifying them through a simple clustering algorithm. The authors test the method on two tasks: worst-group generalization in datasets with spurious correlations, and out-of-distribution generalization under distribution shift. They find promising results in their experiments.
Strengths And Weaknesses
Strengths:
The idea of selecting subpopulations based on the gradients during the initial phase of learning is a very nice effective way of understanding data. This could even be expanded to make static studies of datasets.
The proposed method seems to outperform previous similar methods
Weaknesses (and further open questions that could/should be addressed in the paper):
The argument of section 4.1 leverages on an intuition provided by Nakkiran 2019, the knowledge that minority examples are learned later (which is known, see e.g. arXiv:2104.01769 and arXiv:2207.00391), and the empirical evidence provided in Fig.2. This evidence is provided for a single architecture on a single data set, which is clearly too little to make a general claim.
The paper contains mainly empirical results, but the results in section 4 are shown only with a single model.
The results in section 5.1 are obtained on one single model and simple datasets. Why not use e.g. the superclasses of cifar-100, with the single classes representing the single populations? In that case there would be no explicitly-imposed spurious correlation, but they can be implicit (i.e. not imposed by hand) and in any case I don't fully understand the need of imposing clear sources of spurious correlations to validate the algorithm (beyond that they allow to show in fig4 that some times spurious correlations are avoided). If the algorithm is meant to help in generic application cases, then I would assume that its efficacy would be visible for any data set, and not only very specific ones, right?
There is no analysis of the obtained subpopulation clusters. It is not clear whether the result of the clustering depends on the specific choice of architecture and HyperParameters. Do I obtain the same populations if I change architecture or hyperparameters? Are these found subpopulations actually significant? Analyzing them could confirm whether the arguments leading to the method are correct, or if different explanations should be sought.
As I understand from Alg.1 (this was not too clear to me from the main text), the subpopulations are collected in an initial dummy run that lasts t steps. How is t chosen and how do the clusters/performances change with t?
The authors select given examples with a probability p_i which depends on the subpopulation [page 6]. What would happen if they didn't use these weights for selection, but rather for reweighting of the single examples?
Relatedly, the suggested recipe essentially consists in downsampling the dataset in a smart way, and using a subset S of data instead of the full dataset. One could instead use the probabilities p_i to select the minibatches, and assign examples to a batch depending on p_i. I assume that this would have the same effect as downsampling the dataset, but with the advantage that no example is completely discarded, right?
The arguments given by the authors focus on the initial linear model that is created in the first steps of the dynamics. Does this mean that algorithm 1 is not necessary after the beginning of the dynamics?
In Fig.3 it is shown that the norms of the examples belonging to the largest clusters are smaller. Going further, shouldn't one be looking for each cluster to have the same total norm (i.e. GradNorm*ClusterSize=constant)?
Section 5: why are there no error bars for CMNIST?
Section 5.1: how do the clusters found through the authors' method overlap with the real clusters?
The following statement is not supported by the experiments and should be restated: "training on the subset selected by our method gives much higher in-distribution test accuracy than Random, and EL2N or forgetting scores particularly when the subset is small". I would rather say that for high pruning -when the subset is small- this is true (though the adjective "high" is arbitrary: I would rather say that this is "consistent" or "systematic" through datasets). For low pruning this is not even true. Furthermore, the x axes of figures 5a and 5c are skipping the 0.4 point, giving more visual importance to the highly-pruned runs (where the authors' method outperforms the others) than they actually should have. I hope this was not intentionally done to give a false perception of the results.
From Figure 5, I see that using the whole dataset is still better than using a subset. Therefore, it would seem that the title ("When majorities prevent learning") is misleading, since keeping the majorities is still good.
Beyond the metrics used in table 1, why is the bare accuracy or macro-F1score not reported? As a reader I would be interested in knowing how the authors' method compares to a vanilla training as far as the traditional metrics are concerned. If "majorities prevent learning", do I get an advantage on the global by removing these majorities? Do I lose something? How much? I think these questions should be addressed.
Clarity, Quality, Novelty And Reproducibility
Clarity:
x) The language is ok.
x) The structure and flow are ok.
x) I would suggest a more intuitive color code (or line style) in Fig.2, relating the couples of runs with the same data.
x) Label sizes in figure 5 are too small
x) The topic of the spurious correlations is brought up for some hand-waving arguments and for justifying some experiments, but those are never actually used for any theory, so I don't see the need of defining the attributes a_i.
x) The appendix lacks text describing the figures.
x) The description of Fig.6 in page 4 is confusing. I am not even sure that Figure 6 has the content that is being described.
x) There are many typos in the text, I suggest a careful rereading. Here are some examples:
forgettabiliy
Spurious features has been
neural network experience
When an exampleS is learned
and has been used [should be "have"]
for every exampleS
we independently generateS
and dropS many examples
Subset found by our method allow
in-distribution. and out-of-distribution
Quality:
The paper is based on an idea that I find nice, but I feel that the paper could have developed and explored it more in depth.
Novelty:
The main message of the paper is a recipe to identify subpopulations within each class, and to exploit this knowledge for better training. As far as I know, this is novel.
Reproducibility:
The code is not provided, hyperparameter tuning is not explained (nor is it stated how the HPs related to the authors' method should be chosen or how they influence the performance) and the details of the models are not given.
I don't fully understand how the out of distribution experiments were brought through. |
ICLR | Title
When Majorities Prevent Learning: Eliminating Bias to Improve Worst-group and Out-of-distribution Generalization
Abstract
Modern neural networks trained on large datasets achieve state-of-the-art (indistribution) generalization performance on various tasks. However, their good generalization performance has been shown to be contributed largely to overfitting spurious biases in large datasets. This is evident by the poor generalization performance of such models on minorities and out-of-distribution data. To alleviate this issue, subsampling the majority groups has been shown to be very effective. However, it is not clear how to find the subgroups (e.g. within a class) in large real-world datasets. Besides, naively subsampling the majority groups can entirely deplete some of their smaller sub-populations and drastically harm the in-distribution performance. Here, we show that tracking gradient trajectories of examples in initial epochs allows for finding large subpopulations of data points. We leverage this observation and propose an importance sampling method that is biased towards selecting smaller subpopulations, and eliminates bias in the large subpopulations. Our experiments confirm the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior inand out-of-distribution performance on various datasets.
1 INTRODUCTION
Large datasets have enabled modern neural networks to achieve unprecedented success on various tasks. Large datasets are, however, often heavily biased towards the data-rich head of the distribution (Le Bras et al., 2020; Sagawa et al., 2020; 2019). That means, there are large groups of potentially redundant data points belonging to majority subpopulations, and smaller groups of examples representing minorities. Larger groups often contain spurious biases, i.e., unintended but strong correlations between examples (e.g. image background) and their label. In such settings, overparameterized models learn to memorize the spurious features instead of the core features for the majority, and overfit the minorities (Sagawa et al., 2020). As a result, despite their superior performance on in-distribution data, overparameterized models trained on biased datasets often have a poor worst-group and out-of-distribution generalization performance.
To improve the high worst-group error and of out-of-distribution generalization, techniques such as distributionally robust optimization (DRO), or up-weighting the minority groups are commonly used (Sagawa et al., 2019; 2020). However, such methods have been shown to be highly ineffective for overparameterized models in the presence of spurious features (Sagawa et al., 2020). When majority groups are sufficiently large and the spurious features are strong, overparameterized models choose to exploit the spurious features for the majorities and memorize the minorities, as it entails less memorization on the entire data. In this setting, upweighting minorities only exacerbates spurious correlations, and subsampling the majorities has been advocated for (Sagawa et al., 2020). But, this requires the groups to be specified beforehand, which is not available for real-world datasets. Besides, random subsampling of the majority groups can entirely deplete some of their subpopulations and drastically harm the in-distribution performance (Toneva et al., 2018; Paul et al., 2021).
In this work, we propose an effective way to find large subpopulations of examples (see Fig. 1), and subsample them to ensure inclusion of representative examples from all the subpopulations. We rely on the following recent observations. In the initial training epochs, the network learns important
features and the NTK undergoes rapid changes, which determine its final basin of convergence (Fort et al., 2020). This results in learning a linear function during the initial epochs, followed by learning functions of increasing complexity (Nakkiran et al., 2019). We show that large subpopulations are responsible for forming the initial linear model, by inserting large gradient forces in the first few epochs. The minorities, on the other hand, dictate the higher-complexity functions later in training. To find the large subpopulations, we track the gradient trajectories—the way the gradient changes— during initial training epochs. Then, we cluster similar gradient trajectories together, and employ an importance sampling method that samples data points from every cluster by a probability equal to the inverse of the size of the cluster it belongs to. This allows selecting a balanced subset from different clusters. By studying the effect of our method on the evolution of the model early during the training, we show that our method allows the model to better learn from all the subpopulations by balancing the gradient forces between different groups. This enables learning higher-quality features.
Our empirical studies confirm the effectiveness of our method in improving the worst-group and out-of-distribution generalization, while enjoying a superior in-distribution performance even when the size of the selected sample is small. Notably, on the CMNIST (Alain et al., 2015) and Waterbirds (Sagawa et al., 2019) datasets, which contain strong spurious biases, our method achieves a comparable or even better performance than the state-of-the-art methods, which rely on the underlying group information to uniformly subsample the majority group. In addition, on CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and Caltech-256 (Griffin et al., 2007) our method provides a superior in-distribution performance to state-of-the-art data pruning methods based on forgettability (Toneva et al., 2018) and EL2N (Paul et al., 2021) scores, especially for small subsets. At the same time, it outperforms such methods on out-of-distribution data, CIFAR-10-C (Hendrycks & Dietterich, 2019).
2 RELATED WORK
Data pruning for worst-group generalization. To improve the generalization performance on minorities, preventing the model from learning spurious features is very helpful (Sagawa et al., 2019; 2020). For overparameterized models, randomly subsampling the majorities has been shown to be more effective (Sagawa et al., 2020) than distributionally robust optimization (DRO) (Sagawa et al., 2019) or up-weighting the minority groups (Sagawa et al., 2020). However, this requires the group labels to be specified beforehand, which is not available for large real-world datasets. Besides, if the majority contains imbalanced subpopulations, random subsampling inherits similar biases. Finally, random subsampling of the majority groups can entirely deplete some of their smaller subpopulations and drastically harm the in-distribution performance, as we empirically show.
A different line of work (Sohoni et al., 2020; Nam et al., 2020; Ahmed et al., 2020; Liu et al., 2021; Creager et al., 2021; Taghanaki et al., 2021; Zhang et al., 2022; Nam et al., 2021) studies how to improve worst-group generalization without having access to group labels. These methods require training a model first to minimize the average empirical risk before training the robust model, which doubles the training time and is thus also not practical for large real-world datasets.
Data pruning for OOD generalization. Spurious features have been shown to also harm out-of-distribution generalization (Le Bras et al., 2020). To alleviate this, Swayamdipta et al. (2020) proposed to train on the subset of most ambiguous instances, whose true class probabilities fluctuate frequently during training, and Le Bras et al. (2020) employed AFLite to iteratively filter highly-predictable examples by training multiple linear classifiers on different random partitions of the data. However, such methods drastically harm the in-distribution performance.
Data pruning for in-distribution generalization. The main idea behind all data pruning methods is to define a notion of example difficulty, and prune the easy-to-learn examples. Notably, Coleman et al. (2020) used a smaller trained proxy model to find the most uncertain examples to train a larger model. Toneva et al. (2018) defined a forgetting event of an example as transitioning from being classified correctly to incorrectly during training, and drop the examples with no forgetting events. Most recently, Paul et al. (2021) dropped examples with the lowest average errors (EL2N) recorded early in training and averaged over several initializations. The above heuristics require full or partial training of multiple models, can only drop a relatively small fraction of the examples, and hurt the out-of-distribution performance, as we show in our experiments. In contrast, our method can successfully alleviate bias and achieve a superior in- and out-of-distribution performance.
3 PROBLEM FORMULATION
Training machine learning models is often reduced to minimizing an empirical risk function (ERM). That is, the goal is to find the parameter w∗ that minimizes the average error on the entire training data D = (X,y) = {(xi, yi)}i∈V , where V = {1, · · · , n} indexes the training data. Formally,
w^* = argmin_{w∈W} L(w),   L(w) = E_{(x_i,y_i)∈D}[l(f(w, x_i), y_i)], (1)
where w is the model parameter, and f(w, x_i) and l(f(w, x_i), y_i) are the output of the network and the value of the loss associated with a training example (x_i, y_i), respectively. For large datasets, the average error L is minimized by applying (Stochastic) Gradient Descent with learning rate η starting from a random initial point w_0:
w_{t+1} = w_t − η∇L(w_t),   ∇L(w_t) = J(w_t, X)^T ∇_f l(f(w_t, X), y), (2)

where y = {y_i}_{i=1}^n, X = {x_i}_{i=1}^n, and J(w, X) ∈ R^{n×m} is the Jacobian matrix associated with the nonlinear network f : R^d → R^o, defined as

J(w, X) = [∂f(w, x_1)/∂w · · · ∂f(w, x_n)/∂w]^T, (3)
and ∇_f l(f(w_t, X), y) is the gradient of the loss w.r.t. the output of the network. Furthermore, Θ_t(X, X) = J(w_t, X) J(w_t, X)^T is the empirical neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2018), describing the evolution of the network during training by gradient descent.
Spurious features and majority groups. We consider a similar setting with (Sagawa et al., 2019; 2020), where each training example (xi, yi) is associated with a spurious attribute ai that is correlated with its label yi. The examples with the same spurious attribute and label make a group gj,k ∈ G, where gj,k = {(xi, yi)|i ∈ V, ai = j, yi = k}. The groups which contain considerably more examples than the rest are referred to as majority groups. For example, in the Waterbirds dataset (Sagawa et al., 2019), every example (xi, yi) belongs to one of the 2 classes, yi ∈ {waterbird, landbird} and the image background ai ∈ {water background, land background} is spuriously correlated with the label yi. Thus, there are four groups of examples associated with every combination of spurious attribute and label, i.e., G ={(waterbird, water background), (waterbird, land background), (landbird, water background), (landbird, land background)}. The majority groups are (waterbird, water background), and (landbird, land background). Importantly, in this work, we assume that the groups and spurious attributes are not known at training time.
Subpopulations. Every dataset can be partitioned into s different subpopulations of examples that are similar in terms of their effect on training, i.e. the indices of the training data V can be partitioned into V = {V1, · · · , Vs}. For a formal definition, see Section 4.2. Note that subpopulations may represent a finer clustering compared to group clustering. Fig. 1 shows an illustration of groups vs. subpopulations for Waterbird dataset. We develop a method that automatically clusters the data and identifies large subpopulations.
Objective. Our goal is to find a subset S ⊆ V of size r = |S| from all training examples indexed by V , such that training on the subset alleviates the effect of spurious biases and improves (1) the worst-group generalization when the groups are imbalanced, or (2) out-of-distribution generalization under distribution shift. In both cases, we aim to preserve a good performance on the in-distribution data. In particular, the worst-group error is defined as,
Err_wg = max_{g∈G} E_{(x_i,y_i)|g}[y_i ≠ y_f(w, x_i)], (4)

where y_f(w, x_i) is the label predicted by the model. In other words, Err_wg measures the highest fraction of examples that are incorrectly classified across all groups. Similarly, the out-of-distribution (OOD) performance measures the performance of the model f trained on the training set D, and tested on D′ = (c(X), y), where c is from a set of shifting functions C. Formally,

Err_ood = E_{(x_i,y_i)∈D′}[y_i ≠ y_f(w, x_i)], (5)
measures the fraction of examples that are misclassified when (xi, yi) is drawn i.i.d. from D′.
4 ELIMINATING BIAS IN THE DATA
In this section, we present our main results. We start by discussing the effect of large subpopulations on early learning dynamics. Then, we explain how gradient trajectories of examples during the initial training epochs allow finding the large subpopulations. Next, we employ importance sampling to find a subset that contains a similar number of examples from subpopulations. Finally, we study how the subset found by our method affects the network’s early learning dynamics.
4.1 EFFECT OF LARGE SUBPOPULATIONS ON EARLY LEARNING DYNAMICS
Recent empirical studies on neural networks’ training dynamics show that in the initial epochs the performance of a network trained by SGD can be explained by a linear classifier. Formally, if F and L are the corresponding random variables for the neural network and a linear model respectively, the mutual information between F and y conditioned on L, I(F; y|L), captures the part of F’s success on the prediction of y in addition to L. Then the performance correlation between F and L, μ_y(F; L) := I(F; y) − I(F; y|L), is the part of F’s success on the prediction of y that can be explained by L. Nakkiran et al. (2019) show that there exists T_0 such that μ_y(F_t; L) ≈ I(F_t; y) at training step t for all t < T_0. As training progresses, the network learns functions of increasing complexity (Nakkiran et al., 2019). Furthermore, Fort et al. (2020) show that during the first few epochs of training, neural networks experience a rapid initial transient which determines the final basin of convergence. During this period, the NTK changes very rapidly and learns useful features.
First, we empirically show that large subpopulations are responsible for forming the initial linear model in the first few training epochs. Effectively, during training every example contributes to minimizing the loss by its gradient. Examples with similar gradients insert a similar force on the model and affect the model in the same direction. In the first few epochs, large gradient forces of large subpopulations highly bias the initial linear function. As the gradient forces of large subpopulations persist during training, the initially learned linear function is retained, to the point of zero training error (Nakkiran et al., 2019). As a result, large subpopulations dictate the rapid initial change of the NTK and the prominent features learned in this phase. On the other hand, smaller subpopulations have a smaller influence on the model, and require a larger number of training iterations to be learned by the higher-complexity functions that are shaped later during the training.
When the spurious bias is strong, the initial linear function is dictated mainly by the spurious feature, and persists during training. This yields low training and generalization error on the large groups, and thus prevents learning their core features. On the other hand, on the small subpopulations, functions with much higher complexity overfit and memorize the minorities. Such functions result in a small training error but a poor worst-group generalization performance on the minorities. This further explains the observation by Sagawa et al. (2020) showing that overparameterized models memorize the spurious feature and overfit the minorities. Effectively, the spurious features in large subpopulations prevent the model from learning the core features from the data.
Fig. 2a shows that the network’s prediction early in training can be well explained by a linear classifier (red). Besides, the network trained on large subpopulations can be well explained by the same
linear classifier (purple). This confirms that early training dynamics are dictated by large subpopulations. We also see that the behavior of the network trained on the subset selected by our method (discussed in Section 4.3) cannot be explained by the same linear model (brown). Fig. 2b shows that the linear model explaining the network trained on full data (blue) is similar to the linear model explaining the network trained on large subpopulations (orange), but different than the one explaining the network trained on our chosen subset (green). We see that the performance of the network can be well explained by a linear model, and the linear model fitted on large subpopulations closely matches the one fitted on the entire data. These results further confirm that the linear classifiers fitted to the entire data and to the large subpopulations are effectively the same during the initial training epochs.
4.2 FINDING THE LARGE SUBPOPULATIONS IN EVERY CLASS
The first question we aim to answer is how to find the large subpopulations of the data, without having such labels. As discussed, larger subpopulations insert a large gradient force on the model, and are learned during the initial epochs. When an example is learned, its gradient becomes nearly zero. Hence, every example has a gradient trajectory interpolating between its gradient at initialization and zero. Subpopulations that affect the model similarly have a similar gradient trajectory during training. Therefore, large subpopulations with similar gradient trajectories can be identified based on their gradient trajectory during the first few epochs.
To find the large subpopulations, we cluster the gradient trajectories during the initial epochs of training. As gradients are very high-dimensional, we first reduce the gradient dimensionality to better find the clusters. To do so, we rely on the following observation: for neural networks, the variation of the gradient norms is mostly captured by the gradient of the loss w.r.t. the input to the last layer of the network (Katharopoulos & Fleuret, 2018). These lower-dimensional gradients can be efficiently computed in closed form, and have been used as a gradient proxy in several recent works (Mirzasoleiman et al., 2020; Paul et al., 2021; Pooladzandi et al., 2022). Formally, for every example i we build its gradient trajectory by concatenating the lower-dimensional gradients during the first t training epochs, i.e.,
∇_f^{0:t} l(x_i, y_i) = [∇_f l(f(w_0, x_i), y_i), ∇_f l(f(w_1, x_i), y_i), · · · , ∇_f l(f(w_t, x_i), y_i)], (6)

where ∇_f l(f(w_j, x_i), y_i) is the gradient of the loss w.r.t. the input to the last layer of the network at epoch j for training example (x_i, y_i). Note that as the gradient of an example depends on its label, examples from different classes do not have similar gradients. Hence, we find similar gradient trajectories within every class separately.
Next, we cluster gradient trajectories to find the large subpopulations in every class. While any clustering algorithm can be used, we use the k-medoids objective to find the clusters efficiently. In particular, for 0 < κ < 1, we partition a class indexed by Vc ⊆ V to kc = κ · |Vc| subpopulations, by first finding the set Sc of its kc most centrally located gradient trajectories (medoids) by solving:
S_c^* ∈ argmax_{S⊆V_c, |S|≤k_c} F(S),   where   F(S) := Σ_{i∈V_c} max_{j∈S} (cnt − ∥∇_f^{0:t} l(x_i, y_i) − ∇_f^{0:t} l(x_j, y_j)∥), (7)
Algorithm 1 Training without Bias
Input: Model f, initial epoch number t, subset fraction κ
Output: Model f trained without bias
 1: Train the model f for t epochs from w_0 and save the gradient trajectories ∇_f^{0:t} l(x_i, y_i) for all i ∈ V
 2: for every class V_c do
 3:   S_c ← ∅
 4:   for i = 1, 2, · · · , κ · |V_c| do
 5:     j ∈ argmax_{e ∈ V_c \ S_c} F(e | S_c)
 6:     S_c = S_c ∪ {j}
 7:   for i ∈ S_c do
 8:     V_{c,i} = {j ∈ V_c | i = argmin_{s ∈ S_c} ∥∇_f^{0:t} l(x_j, y_j) − ∇_f^{0:t} l(x_s, y_s)∥}
 9: for j ∈ V do
10:   w_j = |V_{c,i}| s.t. j ∈ V_{c,i}
11:   p_j = u_j^{1/w_j}, where u_j ∈ (0, 1) is a uniform random number
12: S = {r examples with the largest p_j}
13: Train the model f from w_0 on S
where cnt is a large constant. Then, to find the subpopulations, we assign every example to the medoid j ∈ S_c with the most similar trajectory. This partitions the examples in class V_c into k_c subpopulations V_c = {V_{c,1}, · · · , V_{c,k_c}}, where V_{c,j} = {i ∈ V_c | j = argmin_{s∈S_c} ∥∇_f^{0:t} l(x_i, y_i) − ∇_f^{0:t} l(x_s, y_s)∥}.
The maximization problem (Eq. (7)) is NP-hard. However, since the k-medoids objective is monotone and submodular1, a near-optimal solution of size k can be found efficiently in O(|V | · k) time. For maximizing a monotone submodular function, the greedy algorithm provides a (1 − 1/e) approximation guarantee (Wolsey, 1982). The greedy algorithm starts with the empty set S0 = ∅, and at each iteration l, chooses an element e∈V such that Sl = Sl−1 ∪ {argmaxe∈V F (e|Sl−1)}.
4.3 BALANCING THE SUBPOPULATIONS
To alleviate the bias of the large subpopulations and enable effective learning of core features, we aim to prune the large gradient trajectory clusters formed in initial epochs. This prevents the initial linear model from being biased toward the large subpopulations. In doing so, we allow the initial linear model to capture the complexity in different subpopulations, and learn the core features instead of the spurious features of the majorities. Hence, the model obtains a better generalization performance on minorities and out-of-distribution data. However, this should be done carefully as over-pruning the large subpopulations prevents them from participating in forming the initial model. This drastically harms the in-distribution generalization performance of the model.
To address this, we employ an importance sampling method on the union of the subpopulations of all classes, which selects every example with probability proportional to the inverse of the size of the subpopulation it belongs to. In particular, we weight every example i ∈ V_{c,j} by the size of the cluster j ∈ S_c it belongs to, i.e., w_i = |V_{c,j}|. Then, we use the algorithm of Efraimidis & Spirakis (2006) to select a sample with probabilities equal to p_i = 1/w_i, without replacement. The sampling procedure works as follows. For each example i in the dataset, we independently generate a uniform random number u_i ∈ (0, 1) and calculate q_i = u_i^{1/w_i}. The examples with the r largest q_i form the final subset S. Our sampling method biases the sample selection towards the smaller subpopulations, and drops many examples from the larger subpopulations. However, it still preserves the patterns in larger subpopulations, by including a smaller number of their examples in the sample. Effectively, our method balances the gradient forces between different subpopulations. This increases the strength of the core gradient vs. the spurious gradient. In doing so, it allows different subpopulations to participate in forming the initial linear model and dictate a more generalizable basin in which the model can be further fine-tuned. Hence, it enables better learning of the core features.
The pseudocode is illustrated in Algorithm 1.
1A set function F : 2V → R+ is submodular if F (e|S) = F (S ∪ {e})− F (S) ≥ F (T ∪ {e})− F (T ), for any S ⊆ T ⊆ V and e ∈ V \ T . F is monotone if F (e|S) ≥ 0 for any e∈V \S and S ⊆ V .
4.4 EFFECT OF PRUNING ON EARLY NETWORK EVOLUTION
Next, we take a closer look at the effect of our method on the evolution of the model. In particular, we show that training on the subset S selected by our method decreases the speed of learning on large subpopulations and lets the other groups have a larger contribution to the initial phase of learning.
When the model is trained on the subset D_S = (X_S, y_S), the weight evolution over one step can be written as

∆_S w_t = −η∇L(w_t, X_S) = −η J(w_t, X_S)^T ∇_f l(f(w_t, X_S), y_S). (8)

Furthermore, the network evolution can be approximated using a first-order Taylor expansion, i.e.,

∆_S f(w_t, X) = J(w_t, X) ∆_S w_t = −η J(w_t, X) J(w_t, X_S)^T ∇_f l(f(w_t, X_S), y_S) (9)
             = −η Θ_t(X, X_S) ∇_f l(f(w_t, X_S), y_S), (10)
where Θt(X,XS) = J (wt,X )J (wt,XS)T is the empirical neural tangent kernel, describing the evolution of the network when training only on the subset S. The following Lemma quantifies the effect of pruning the large subpopulation on the model evolution at one training step.
Lemma 4.1 Training on the subset S sampled from ζ subpopulations found by our method, with learning rate η ≤ 1/∥J(w_t, X)∥, changes the predictions of the model at every step by at most:

∥∆f(w_t, X) − ∆_S f(w_t, X)∥ = η∥Θ_t(X, X) ∇_f l(X, w_t) − Θ_t(X, X_S) ∇_f l(X_S, w_t)∥ (11)
 ≤ Σ_{z∈[ζ]} |α′_z − α_z| · max_{j∈V_z} ∥∇l(f(w_t, x_j), y_j)∥, (12)
where αz = |Vz| is the size of subpopulation Vz , and α′z = |Vz ∩ S| is its size in the subset S.
The proof can be found in Appendix A.1.
Lemma 4.1 upper-bounds how training on the subset found by our method changes the effect of different subpopulations on the model predictions. When the subpopulations are approximately balanced, we have α′_z ≈ κα_z. Thus, training on the subset S yields a network evolution similar to that of the full data, and only scales down the learning rate. However, when subpopulations are imbalanced, it effectively decreases the gradient force of large subpopulations by |α_z − α′_z| · max_{j∈V_z} ∥∇_f l(f(w_t, x_j), y_j)∥. Effectively, this reduces the speed of learning on such subpopulations and their bias on the model. On the other hand, our importance sampling method preserves the small subpopulations, i.e., α_z ≈ α′_z, and maintains their original gradient force on the model. Therefore, our subset balances the gradient forces and lets different subpopulations participate in forming the lower-complexity models in the initial epochs. As Fig. 3 shows, individual examples in large subpopulations have a smaller gradient norm. Hence, a larger number of them can be pruned without significantly affecting the model. However, entirely dropping the large subpopulations has a larger cumulative effect than dropping a smaller number of examples in small subpopulations with larger norms. Hence, it drastically harms the in-distribution performance.
5 EXPERIMENTS
In this section, we evaluate the effectiveness of our method in assisting neural networks to learn better features. In particular, we consider the following two scenarios. First, we apply our method to improve the worst-group generalization performance, when training data contains spurious correlation. Then, we consider the application of our method to improve out-of-distribution performance, under distribution shift. In both cases, we also compare the in-distribution generalization performance of the networks trained on our subset vs full training data.
5.1 WORST-GROUP GENERALIZATION IN PRESENCE OF SPURIOUS CORRELATION
First, we evaluate the worst-group generalization performance of a model trained on our subset vs. the full data, in the presence of spurious correlations. We record gradient trajectories during the initial 4 epochs and select 10% of the training examples as the subset. The reported results are averaged over 3 runs.

Datasets & Models. We apply our method to the Colored-MNIST and Waterbirds datasets. The Colored-MNIST dataset is a synthetic dataset derived from MNIST (LeCun et al., 1998). It was first proposed in (Alain et al., 2015) as a binary classification task that contains spurious correlations: the grey-scale digits are changed to colors that are strongly correlated with the labels. We use a 5-layer CNN with 2 convolutional layers and 3 fully-connected layers. The Waterbirds dataset was introduced by Sagawa et al. (2019) to study the spurious correlation between the background and the foreground in image recognition. Species in the Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset (Wah et al., 2011) are grouped into two classes, waterbirds and landbirds. All birds are then cut and pasted onto new background images, with waterbirds more likely to appear on water and landbirds having a higher probability of appearing on land. There are 4795 training examples in total: 3498 landbirds with land background, 184 landbirds with water background, 56 waterbirds with land background, and 1057 waterbirds with water background. We use a pretrained ResNet-50 model.

Baselines. Empirical risk minimization (ERM) trains on all data, Random selects a subset uniformly at random, Upweight weights every example by the inverse of its group size, and Balanced samples an equal number of examples from different groups.

Ablation. To show the failure mode of random sampling when the majority has imbalanced subpopulations, we modify the dataset to make it more imbalanced by pruning smaller clusters.

Evaluation metrics. We use two metrics proposed by Sagawa et al. (2019), namely worst-group accuracy and adjusted average accuracy. Worst-group accuracy is the minimum accuracy across all groups, and adjusted average accuracy is the average accuracy over groups weighted by their size (a short sketch of both metrics is given below).

Results. Table 1 shows that the models trained on subsets found by our method obtain the highest worst-group and in-distribution test accuracy when compared with baselines that do not require group labels. Moreover, our method achieves performance comparable to methods that use the group information, and even outperforms them on the Waterbirds dataset. We note that group labels are not available in many real-world datasets. Methods that do not rely on group labels, including ours, do not require knowing the minority groups and are therefore more practical in realistic settings.
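The two evaluation metrics above can be computed as follows; this is a hedged NumPy sketch based only on the definitions quoted from Sagawa et al. (2019), and the array layout (one group id per test example) is an assumption.

import numpy as np

def group_metrics(preds, labels, groups):
    # preds, labels, groups: 1-D arrays with one entry per test example.
    correct = (preds == labels).astype(float)
    group_ids = np.unique(groups)
    accs = np.array([correct[groups == g].mean() for g in group_ids])
    sizes = np.array([(groups == g).sum() for g in group_ids])
    worst_group_acc = accs.min()                            # minimum accuracy across groups
    adjusted_avg_acc = (accs * sizes).sum() / sizes.sum()   # size-weighted group average
    return worst_group_acc, adjusted_avg_acc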
GradCAM. Fig. 4 shows GradCAM (Selvaraju et al., 2017) saliency maps for samples from the Waterbirds dataset with water and land backgrounds. Warmer colors denote higher saliency, indicating that the model considered these pixels more important in making the final classification, as measured by gradient activations. We see that the subset found by our method allows the model to learn the core features much better than the ERM and Random baselines.
5.2 OUT-OF-DISTRIBUTION GENERALIZATION UNDER DISTRIBUTION SHIFT
Next, we empirically evaluate the in-distribution and out-of-distribution performance of our method under distribution shift. The results are based on 3 independent runs, each with a different mini-batch order and initial parameter values. We record gradient trajectories during the initial 4 epochs.

Datasets. We apply our method to CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), and Caltech-256 (Griffin et al., 2007). In particular, we keep the number of training iterations fixed (78k for CIFAR-10 and CIFAR-100, and 4.8k for Caltech-256) as we vary the size of the subset.

Baselines. We compare our method with Random sampling and with the state-of-the-art baselines for in-distribution data pruning, based on EL2N (Paul et al., 2021) and forgetting scores (Toneva et al., 2018); both are sketched below. The EL2N score of a training example i is defined as E∥∇_f l(f(w, x_i), y_i)∥_2, i.e., the expected norm of the error vector between the softmax output and y_i for the cross-entropy loss. We calculate EL2N after 20 epochs of training and average it over 10 different runs, as this is shown to be the most accurate. The forgetting score of an example is the number of times the example is misclassified after having been correctly classified during the entire training run. We calculate the number of forgetting events for each training example by averaging over 5 runs of 200 epochs, as suggested by Toneva et al. (2018).

Results. Fig. 5 (a), (b), (c) show that, on different datasets, training on the subset selected by our method gives much higher in-distribution test accuracy than Random, EL2N, or forgetting scores, particularly when the subset is small. Note that the EL2N and forgetting-score baselines use more information, gathered over many training epochs and multiple runs. Importantly, Fig. 5 (d) confirms that our method outperforms the baselines on CIFAR-10-C (Hendrycks & Dietterich, 2019), with distribution shift. We train on our downsampled CIFAR-10 training set and test on CIFAR-10-C, a collection of OOD test sets for CIFAR-10. For each corruption type, we report the average test accuracy over 5 different intensity levels. Our method achieves at least 2% higher test accuracy than the other baselines. For some corruption types (Gaussian noise, shot noise, and impulse noise), our performance is even on par with or surpasses training on the full data.
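The two pruning scores used as baselines can be sketched as follows; this assumes softmax outputs (or per-epoch correctness flags) have already been collected, and the averaging protocol follows the description above rather than the original implementations.

import numpy as np

def el2n_scores(softmax_runs, onehot_labels):
    # softmax_runs:  (runs, n, classes) softmax outputs from independent runs.
    # onehot_labels: (n, classes) one-hot targets.
    errors = np.linalg.norm(softmax_runs - onehot_labels[None], axis=-1)  # (runs, n)
    return errors.mean(axis=0)                 # EL2N: expected norm of the error vector

def forgetting_scores(correct_history):
    # correct_history: (epochs, n) boolean matrix, True if the example was classified
    # correctly at that epoch. A forgetting event is a correct -> incorrect transition.
    h = correct_history.astype(int)
    return np.maximum(h[:-1] - h[1:], 0).sum(axis=0)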
6 CONCLUSION
We showed that larger subpopulations containing spurious biases prevent the learning of high-quality features, and that such subpopulations can be identified by tracking the gradient trajectories of examples in the initial epochs. We then proposed an importance sampling method to balance the subpopulations and ensure the inclusion of representative examples from all of them. Our experiments confirmed the effectiveness of our approach in eliminating spurious biases and learning higher-quality models with superior in- and out-of-distribution performance on various datasets.
A APPENDIX
A.1 PROOF OF LEMMA 4.1
The evolution of the logits over one step can be written as:
∥∆f(w_t, X) − ∆_S f(w_t, X)∥ = η∥Θ_t(X, X) ∇_f l(X, w_t) − Θ_t(X, X_S) ∇_f l(X, w_t)∥   (13)
= η∥J(w_t, X) J(w_t, X)^T ∇_f l(X, w_t) − J(w_t, X) J(w_t, X_S)^T ∇_f l(X_S, w_t)∥   (14)
≤ η∥J(w_t, X)∥ · ∥J(w_t, X)^T ∇_f l(X, w_t) − J(w_t, X_S)^T ∇_f l(X_S, w_t)∥   (15)
≤ ∥Σ_{i∈V} ∇l(f(w_t, x_i), y_i) − Σ_{j∈S} ∇l(f(w_t, x_j), y_j)∥   (16)
≤ ∥Σ_{z∈[ζ]} Σ_{j∈V_z\S} ∇l(f(w_t, x_j), y_j)∥   (17)
≤ Σ_{z∈[ζ]} Σ_{j∈V_z\S} ∥∇l(f(w_t, x_j), y_j)∥   (18)
≤ Σ_{z∈[ζ]} |α′_z − α_z| · max_{j∈V_z} ∥∇l(f(w_t, x_j), y_j)∥,   (19)
where the inequality in Eq. (16) holds because η ≤ 1/∥J(w_t, X)∥.
A.2 EXPERIMENTAL DETAILS
A.2.1 DATASETS
CMNIST. We construct a colored MNIST dataset with spurious correlations by using colors as the spurious attributes, as follows. First, we define an image classification task with 5 classes by mapping every 2 consecutive digits (0 and 1, 2 and 3, 4 and 5, 6 and 7, 8 and 9) into the same class. We use the official test split of MNIST, randomly select 50k examples from the train split as the training set, and then use the remaining 10k samples in the train split as the validation set.
Then, for each class y_i, we color the foreground of a p_corr,i fraction of the training examples with color a_i from the set of colors A = {#ff0000, #85ff00, #00fff3, #6e00ff, #ff0018}, represented by their hex codes. We call this fraction of the data the majority group of class y_i. The higher p_corr,i is, the stronger the spurious correlation between the class y_i and the spurious attribute a_i. The remaining 1 − p_corr,i fraction of the training examples are colored with a random color from A \ {a_i}. In Fig. 6, we visualize examples in the 5 classes with the 5 colors and highlight the majority groups with white bounding boxes. In our experiments, we used p_corr = [0.995, 0.95, 0.9, 0.8, 0.6] to construct spurious correlations with different strengths and groups with different sizes.
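A minimal sketch of this coloring procedure is given below; the foreground/background handling and the data layout are simplifying assumptions, and only the class-to-color assignment, the color set, and the p_corr logic follow the description above.

import numpy as np

COLORS = ["#ff0000", "#85ff00", "#00fff3", "#6e00ff", "#ff0018"]

def hex_to_rgb(h):
    return np.array([int(h[i:i + 2], 16) for i in (1, 3, 5)], dtype=np.float32) / 255.0

def colorize(images, digits, p_corr, seed=0):
    # images: (n, 28, 28) grayscale digits in [0, 1]; class labels are digit // 2 in {0,...,4}.
    rng = np.random.default_rng(seed)
    palette = np.stack([hex_to_rgb(c) for c in COLORS])          # (5, 3)
    labels = digits // 2
    out = np.zeros(images.shape + (3,), dtype=np.float32)
    for i, (img, y) in enumerate(zip(images, labels)):
        if rng.random() < p_corr[y]:
            color = palette[y]                                   # majority group: spurious color
        else:
            color = palette[rng.choice([c for c in range(5) if c != y])]
        out[i] = img[..., None] * color                          # color the foreground only
    return out, labels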
A.3 ADDITIONAL EXPERIMENTS | 1. What are the strengths and weaknesses of the proposed method in terms of its ability to discover large subpopulations and improve worst-group accuracy without training group information?
2. How does the proposed method compare to other approaches that aim to improve sub-group robustness without training group information, such as data pruning oriented methods and domain generalization methods?
3. Are there any ablation studies conducted to evaluate the effectiveness of different components of the proposed method, such as the inverse importance sampling and the choice of epochs for gradient trajectories?
4. How does the proposed method impact the computational costs compared to other baseline methods, and how can we efficiently implement it?
5. Can you provide more details on what constitutes a "large subpopulation" and how it is defined in the context of the CIFAR dataset?
6. How does the proposed method handle permutation invariance, and can you provide examples or simulations to demonstrate its effectiveness in various scenarios?
7. Can you explain why directly using the inverse of the size of the subpopulation leads to better outcomes than other heuristic approaches, such as square-root inverse weighting?
8. How does the proposed method position itself in the existing literature on improving worst-group accuracy without group annotations, and how does it contribute to advancing this line of research? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the problem of (in-distribution) generalization when facing potential subgroups / subpopulations in the training data, without knowing the group annotations. The paper claims that tracking gradient trajectories of examples in initial epochs allows for finding large subpopulations of data points. It also proposes an importance sampling method that is biased towards selecting smaller subpopulations, which is claimed to eliminate bias in the large subpopulations. Experiments on several datasets show that the method might be effective over several baselines in both in-distribution and out-of-distribution generalization.
Strengths And Weaknesses
Strengths
The idea of leveraging gradient trajectory for subgroup discovery and robustness seems to be novel and not explored before in this field.
Weaknesses
Unfortunately, multiple major weaknesses exist in the current paper.
Motivation
I do not buy the motivation argument that "large subpopulations are responsible for forming the initial linear model".
Specifically, the descriptions in Section 4.1 are messy and unclear, sometimes even not readable (see comments later in "Writing"). First, mutual information is never formally defined or described in the text, but appears in Figure 2 -- how do you compute (or estimate) the mutual information? Is the computation the same across methods?
How do you obtain "large subpopulation" in this toy example? What is "large subpopulation" and how is it defined w.r.t. CIFAR dataset?
Does this observation persist across different datasets and different network architectures? The authors used a 4-layer CNN -- again, the detailed architecture is not explained, and the rationale behind using this architecture is not justified (why not use a standard ResNet-18?). The observation could easily be driven by the fact that the capacity of the network is limited. That being said, different datasets and/or architectures might directly influence the observation, and more justification is needed; otherwise it is not convincing.
Related to the point above, Figure 2 in Section 4.1 is not understandable. There is no text describing the figure. Fig. 2b is completely missing from the text; rather, the authors refer to Figure 6 in the last paragraph of Section 4.1, which is a label-distribution figure in the Appendix.
Method
For the importance sampling method, why directly use the inverse of the size of the subpopulation? Samples and subpopulations could have hierarchy and semantic similarities. Past works also show that square-root inverse weighting can be empirically better than inverse weighting. Please explain the rationale, either theoretically or empirically, for why this leads to better outcomes than other heuristic approaches (e.g., square-root inverse weighting).
Related work
The related work is somewhat misleading and not comprehensive. The authors discuss only "data pruning" oriented methods, either for in-distribution (avg / worst) or out-of-distribution generalization. However, the real theme of the paper is how to improve sub-group robustness without training group information, and this closely related line of work is entirely missing. To name a few methods along this direction (improving worst-group accuracy without group annotations): [1]-[5].
(also weaknesses in experiments) As stated above, the line of work on worst-group accuracy without group annotations should also be compared in the experiments to show what advantage this paper brings, and to properly position this paper in the literature.
Experiments
As detailed in the "Related work" part, a major drawback of the paper is that it fails to compare with the actual line of works that is mostly related, i.e., improving worst-group accuracy without group annotations [1]-[5]. Without comparing to strong baselines as [1]-[5], the performance is not justifiable or convincing.
Moreover, for "out-of-distribution generalization" part, the paper also failed to compare with SOTA domain generlization (DG) approaches, including [6, 7] as well as those competitive methods in the DomainBed benchmarks [8]. Again, the selected baselines are no longer SOTA, and without comparing to the aforementioned strong baselines, the results are not convincing.
Related to the questions above, I wonder why not the authors directly test on the DomainBed benchmark. Directly testing on DG benchmarks [7] would be the most reasonable choice to fairly evaluate the proposed method to other algorithms in DomainBed.
The choice of the first several epochs for gradient trajectories seems to be an important hyper-parameter for the method. However, the number used in the experiments seems to be rather randomly chosen; there are also no ablation studies on the number of epochs, or on how this number affects the performance.
Related to the questions above, there are no ablation studies on the "inverse importance sampling" part. Whether this is necessary, or it is better than other heuristic approaches (e.g., square-root inverse weighting), is completely missing.
The "gradient trajectory" idea is plausible, but also introduces potentially high computational costs. For example, ERM does not need any tracking or storing intermediate variables. Although the authors claim in the paper that the method can be implemented efficiently, I would like to see the actual computational cost w.r.t. time and memory costs. Wall-clock training time comparison as well as memory consumption to other baseline methods would be good.
Despite mentioning "large subpopulations" in the methods, the results never show what the actual "large subpopulations" discovered by the model are, or how meaningful they are.
Writing
The overall writing quality is bad. The writing needs to be significantly improved.
For example, the last paragraph on page 4 is not even self-contained or readable. The first sentence in this paragraph is not grammatically correct. Moreover, "Fig. 6" mentioned in the text refers to a figure in the Appendix showing the label distribution, but has nothing to do with the text here.
typo: line 4 page 4 - y f(w, x_i)
References
[1] Liu et al. Just Train Twice: Improving Group Robustness without Training Group Information. ICML 2021.
[2] Zhang et al. Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations. ICML 2022.
[3] Nam et al. Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation. ICLR 2022.
[4] Lahoti et al. Fairness without Demographics through Adversarially Reweighted Learning. NeurIPS 2020.
[5] Creager et al. Environment Inference for Invariant Learning. ICML 2021.
[6] Cha et al. SWAD: Domain Generalization by Seeking Flat Minima. 2021.
[7] Arpit et al. Ensemble of averages: Improving model selection and boosting performance in domain generalization. 2021.
[8] Gulrajani et al. In search of lost domain generalization. 2021.
Clarity, Quality, Novelty And Reproducibility
Clarity
Bad.
The descriptions in Section 4.1 (motivation part) are messy and unclear, sometimes even not readable (see comments later in "Writing"). First, mutual information is never formally defined or described in the text, but appears in Figure 2 -- how do you compute (or estimate) the mutual information? Is the computation the same across methods?
The overall writing quality is bad. The writing needs to be significantly improved.
Please refer to the weaknesses part for a complete review.
Quality
Not good for the current manuscript.
There are major weaknesses in Motivation, Methods, Related work, and Experiments.
Please refer to the weaknesses part for a complete review.
Novelty
Fair.
I understand that the idea of leveraging gradient trajectory for subgroup discovery and robustness seems to be novel and not explored before in this field. However, the literature review is pretty limited, and it seems that the paper is not well positioned w.r.t. the literature.
Reproducibility
Fair.
No code is provided along with the submission. The pseudo code is provided for the algorithm. |
ICLR | Title
Rethinking Numerical Representations for Deep Neural Networks
Abstract
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency, to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
1 INTRODUCTION
Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), key drivers behind these successes are advances in computing infrastructure that enable large-scale deep learning: the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever-growing amount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.
A set of core design decisions are common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation. Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform.
This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drew conclusions using only small neural networks. For example, the work from Gupta et al. 2015 evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision.
Figure 1: A fixed-point representation. Hardware parameters include the total number of bits and the position of the radix point.

Figure 2: A floating-point representation. Hardware parameters include the number of mantissa and exponent bits, and the bias.

In this work, we explore the accuracy-efficiency trade-off made available via specialized custom-precision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized precision settings for fixed-point and floating-point representations on accuracy and computational performance. We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that:
1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); Chen et al. (2014).
2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.
3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation – a more expensive computation than the standard single precision floating-point format. Current platform designers should reconsider the use of the floating-point representations for DNN computations instead of the commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015).
To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6× with less than 1% degradation in inference accuracy.
2 CUSTOMIZED PRECISION HARDWARE
We begin with an overview of the available design choices in the representation of real numbers in binary and discuss how these choices impact hardware performance.
2.1 DESIGN SPACE
We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floatingpoint computation units are substantially larger, slower, and more complex than integer units. In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is to consider customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and assignment of bits to the mantissa and exponent in a floating-point value.
2.2 CUSTOMIZED PRECISION TYPES
In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array x, encoded in fixed point with the radix point at bit l (counting from the right), represents the value 2^(−l) Σ_{i=0}^{N−1} 2^i · x_i.
In contrast to floating point, fixed-point representations with a particular number of bits have a fixed level of precision. By varying the position of the radix point, we change the representable range.
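As a hedged illustration of the encoding just described, the sketch below quantizes a real value onto an unsigned N-bit fixed-point grid with l fractional bits; the sign handling and the round-to-nearest rounding mode are simplifying assumptions rather than details taken from the paper.

def to_fixed_point(value, total_bits, frac_bits):
    # The stored code is round(value * 2**frac_bits), saturated to the representable range.
    scale = 1 << frac_bits
    max_code = (1 << total_bits) - 1
    code = min(max(int(round(value * scale)), 0), max_code)
    # The decoded value is 2**(-l) * sum_i 2**i * x_i, i.e. code / 2**frac_bits.
    return code / scale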
An example floating-point representation is depicted in Figure 2. As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively. The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with Nm mantissa bits, Ne exponent bits, and a bias of b, encodes the value
2^{(Σ_{i=0}^{Ne−1} 2^i · e_i) − b} · (1 + Σ_{i=1}^{Nm} 2^{−i} · m_i), where m and e are the segments of a bit array representing the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e., float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity. Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic ranges available given particular representations, manifesting themselves computationally as rounding and saturation errors. These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself.
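The value encoded by a custom floating-point format can likewise be written directly from the formula above; this sketch ignores the special encodings (zero, infinity) mentioned in the text and takes the bit fields as plain lists, which is an assumption about representation rather than something specified by the paper.

def custom_float_value(sign_bit, exponent_bits, mantissa_bits, bias):
    # exponent_bits / mantissa_bits: lists of 0/1, most significant bit first.
    # The leading 1 of the mantissa is implicit and not stored.
    e = sum(b << i for i, b in enumerate(reversed(exponent_bits)))
    m = 1.0 + sum(b * 2.0 ** -(i + 1) for i, b in enumerate(mantissa_bits))
    return (-1.0) ** sign_bit * 2.0 ** (e - bias) * m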
2.3 HARDWARE IMPLICATIONS
The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width.
Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and lowers its energy consumption, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 as the mantissa bit width is varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvements.
This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations. Thus, in designing hardware that uses customized representations
there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs.
3 METHODOLOGY
We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics: classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations. Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point.
3.1 ACCURACY
We evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for ImageNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that DNN predicts correctly after five attempts.
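A rough software emulation of this per-operation truncation is sketched below; it assumes round-to-nearest on the mantissa and simple exponent clamping, whereas the exact rounding behaviour of the authors' modified Caffe is not specified in this excerpt.

import numpy as np

def quantize_float(x, n_mantissa, n_exponent, bias):
    # Snap array values to a custom (n_mantissa, n_exponent, bias) floating-point format,
    # to be applied after every arithmetic operation when simulating inference.
    x = np.asarray(x, dtype=np.float64)
    sign, mag = np.sign(x), np.abs(x)
    out = np.zeros_like(mag)
    nz = mag > 0
    e = np.clip(np.floor(np.log2(mag[nz])), -bias, (2 ** n_exponent - 1) - bias)
    frac = np.round(mag[nz] / 2.0 ** e * 2 ** n_mantissa) / 2 ** n_mantissa
    frac = np.minimum(frac, 2.0 - 2.0 ** -n_mantissa)   # crude saturation of the mantissa
    out[nz] = frac * 2.0 ** e
    return sign * out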
3.2 EFFICIENCY
We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and improved parallelism due to area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput.
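The resulting performance model can be summarized in a couple of lines; the multiplicative form below is an assumption consistent with the "quadratic improvement" statement, not a formula quoted from the paper.

def throughput_gain(delay_base, area_base, delay_custom, area_custom):
    # Smaller MAC units both clock faster and allow more parallel copies in the
    # same silicon budget, so the two factors multiply (roughly quadratic overall).
    clock_gain = delay_base / delay_custom
    parallelism_gain = area_base / area_custom
    return clock_gain * parallelism_gain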
3.3 EFFICIENT CUSTOMIZED PRECISION SEARCH
To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced. There are hundreds of designs among floating-point and fixed-point formats due to designs varying by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to select the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.
The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the NN final accuracy metric. Thus, instead
of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layer of two configurations by calculating the linear coefficient of determination between the last layer activations.
A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy for a subset of configurations is evaluated. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format.
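A compact sketch of this search procedure is given below; the helper names are hypothetical, the linear accuracy model is assumed to be fitted beforehand on (R^2, accuracy) pairs from other networks, and the bit-level refinement loop described above is reduced to a single check.

import numpy as np

def r_squared(ref_acts, test_acts):
    # Linear coefficient of determination between last-layer activations of the
    # original and customized-precision networks, flattened over a few inputs.
    x, y = ref_acts.ravel(), test_acts.ravel()
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

def pick_config(configs, ref_acts, activations_fn, accuracy_model, target_acc):
    # configs: candidate precision settings sorted by decreasing speedup.
    # activations_fn(cfg): last-layer activations under that precision setting.
    # accuracy_model(r2): predicted end-to-end accuracy from the linear model.
    for cfg in configs:
        if accuracy_model(r_squared(ref_acts, activations_fn(cfg))) >= target_acc:
            return cfg          # fastest setting predicted to meet the accuracy target
    return configs[-1]          # fall back to the slowest, most precise candidate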
4 EXPERIMENTS
In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.
4.1 EXPERIMENTAL SETUP
We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly-selected 1% of the validation set to make the experiments tractable.
4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS
To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single precision representation (i.e. the original accuracy with 1× speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.
For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single-precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.
By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impacts the customized precision flexibility of the network. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.
The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single-precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) parts are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single-precision accuracy on classification in AlexNet).
The fastest and most energy efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2× speedup and a 3.4× savings in energy over the single precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7× speedup and 3.0× energy savings.
4.3 SOURCES OF ACCUMULATION ERROR
In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match. We find two causes of error between the customized precision fixed-point and floating-point representations, saturation and excessive rounding.
In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is from saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure.
After reaching saturation, the positive values are discarded and the final output is unpredictable. Although floating-point representations do not saturate as easily, the floating-point configuration with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again, the lost information from saturation causes an unpredictable final output.
For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration’s running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.
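The saturation effect can be reproduced with a short simulation along the following lines; the signed-register convention and rounding are assumptions, and the accumulated terms would come from the weighted inputs of the neuron under study.

def accumulate_fixed_point(terms, total_bits, frac_bits):
    # Serially accumulate dot-product terms in a signed fixed-point register,
    # saturating at the largest representable magnitude instead of wrapping.
    scale = 1 << frac_bits
    hi = (1 << (total_bits - 1)) - 1
    lo = -(1 << (total_bits - 1))
    acc, trace = 0, []
    for t in terms:
        acc = max(min(acc + int(round(t * scale)), hi), lo)
        trace.append(acc / scale)   # running sum as seen by the reduced-precision unit
    return trace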
The other main cause of accuracy loss is from values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted, until the precision is reduced low enough for the weight to become zero.
While it may be intuitive, based on these results, to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, the realizable gains of multi-precision configurations present significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process.
4.4 CUSTOMIZED PRECISION SEARCH
Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the required time to navigate the customized precision design space and still provide an optimal design choice in terms of speedup, limited by an accuracy constraint.
Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last layer activations compared to those of the original NN. This model, although built using all of the customized precision configurations from AlexNet, CIFARNET, and LeNet5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN, individually, requires as much time as exhaustive search.
Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via
exhaustive search. We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation, where all configurations of the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for measuring classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible.
We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170× faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e., a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although occasionally violating the target accuracy (i.e., the cases where the speedup is higher than that of the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.
Speedup. We present the final speedup produced by our search method in Figure 11 when the algorithm is configured for 99% target accuracy and to use two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size). VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.
5 RELATED WORK
To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs on large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariaux et al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works focused on fixed-point computation due to the fixed-point representation working well on small-scale neural networks. We find very different conclusions when considering production-ready DNNs.
Other recent works have looked at alternative neural network implementations such as spiking neural networks for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014). This is a very different computational model that requires redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015).
6 CONCLUSION
In this work, we introduced the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single precision floating point representation in hardware results in surrendering substantial performance. On the other hand, picking a configuration that has lower precision than optimal will result in severe accuracy loss. By reconsidering the representation from the ground up in designing custom precision hardware and using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6× with less than 1% degradation in inference accuracy. | 1. What is the focus of the paper regarding neural network representation?
2. What are the strengths of the proposed approach, particularly in terms of accuracy, speed, and energy consumption?
3. What are the weaknesses of the paper, especially regarding its scope, simulations, and omissions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are some suggestions for improving the paper, including discussions on quantization tricks, hardware considerations, and parallelization? | Review | Review
The paper studies the impact of using customized number representations on accuracy, speed, and energy consumption of neural network inference. Several standard computer vision architectures including VGG and GoogleNet are considered for the experiments, and it is concluded that floating point representations are preferred over fixed point representations, and floating point numbers with about 14 bits are sufficient for the considered architectures resulting in a small loss in accuracy.
The paper provides a nice overview of floating and fixed point representations and focuses on an important aspect of deep learning that is not well studied. There are several aspects of the paper that could be improved, but overall, I am leaned toward weak accept assuming that the authors address the issues below.
1- The paper is not clear that it is only focusing on neural network inference. Please include the word "inference" in the title / abstract to clarify this point and mention that the findings of the paper do not necessarily apply to neural network training as training dynamics could be different.
2- The paper does not discuss the possibility of adopting quantization tricks during training, which may result in the use of fewer bits at inference.
3- The paper is not clear whether in computing the running time and power consumption, it includes all of the modules or only multiply-accumulate units? Also, how accurate are these numbers given different possible designs and the potential difference between simulation and production? Please elaborate on the details of simulation in the paper.
4- The whole discussion about "efficient customized precision search" seem unimportant to me. When such important hardware considerations are concerned, even spending 20x simulation time is not that important. The exhaustive search process could be easily parallelized and one may rather spend more time at simulation at the cost of finding the exact best configuration rather than an approximation. That said, weak configurations could be easily filtered after evaluating just a few examples.
5- Nvidia's Pascal GP100 GPU supports FP16. This should be discussed in the paper and relevant Nvidia papers / documents should be cited.
More comments:
- Parts of the paper discussing "efficient customized precision search" are not clear to me.
- As future work, the impact of number representations on batch normalization and recurrent neural networks could be studied. |
ICLR | Title
Rethinking Numerical Representations for Deep Neural Networks
Abstract
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as it relates to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
1 INTRODUCTION
Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), a key driver behind these successes are advances in computing infrastructure that enable large-scale deep learning—the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever growing amount of data available for indexing, analysis, and training, and the increasing prevalence of everlarger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.
A set of core design decisions are common to the design of these infrastructures. One such critical choice is the numerical representation and precision used in the implementation of underlying storage and computation. Several recent works have investigated the numerical representation for DNNs Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015). One recent work found that substantially lower precision can be used for training when the correct numerical rounding method is employed Gupta et al. (2015). Their work resulted in the design of a very energy-efficient DNN platform.
This work and other previous numerical representation studies for DNNs have either limited themselves to a small subset of the customized precision design space or drew conclusions using only small neural networks. For example, the work from Gupta et al. 2015 evaluates 16-bit fixed-point and wider computational precision on LeNet-5 LeCun et al. (1998) and CIFARNET Krizhevsky & Hinton (2009). The fixed-point representation (Figure 1) is only one of many possible numeric representations. Exploring a limited customized precision design space inevitably results in designs lacking in energy efficiency and computational performance. Evaluating customized precision accuracy based on small neural networks requires the assumption that much larger, production-grade neural networks would operate comparably when subjected to the same customized precision.
In this work, we explore the accuracy-efficiency trade-off made available via specialized customprecision hardware for inference and present a method to efficiently traverse this large design space to find an optimal design. Specifically, we evaluate the impact of a wide spectrum of customized
integer fraction
11001.01110 |||||||||| ......
Figure 1: A fixed-point representation. Hardware parameters include the total number of bits and the position of the radix point.
x2
mantissa
1.01101 ||||| ...
exponent
10011 ||||| ... - bias
Figure 2: A floating-point representation. Hardware parameters include the number of mantissa and exponent bits, and the bias.
precision settings for fixed-point and floating-point representations on accuracy and computational performance. We evaluate these customized precision configurations on large, state-of-the-art neural networks. By evaluating the full computational precision design space on a spectrum of these production-grade DNNs, we find that:
1. Precision requirements do not generalize across all neural networks. This prompts designers of future DNN infrastructures to carefully consider the applications that will be executed on their platforms, contrary to works that design for large networks and evaluate accuracy on small networks Cavigelli et al. (2015); Chen et al. (2014).
2. Many large-scale DNNs require considerably more precision for fixed-point arithmetic than previously found from small-scale evaluations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014). For example, we find that GoogLeNet requires on the order of 40 bits when implemented with fixed-point arithmetic, as opposed to less than 16 bits for LeNet-5.
3. Floating-point representations are more efficient than fixed-point representations when selecting optimal precision settings. For example, a 17-bit floating-point representation is acceptable for GoogLeNet, while over 40 bits are required for the fixed-point representation – a more expensive computation than the standard single precision floating-point format. Current platform designers should reconsider the use of the floating-point representations for DNN computations instead of the commonly used fixed-point representations Cavigelli et al. (2015); Chen et al. (2014); Du et al. (2014); Muller & Indiveri (2015).
To make these conclusions on large-scale customized precision design readily actionable for DNN infrastructure designers, we propose and validate a novel technique to quickly search the large customized precision design space. This technique leverages the activations in the last layer to build a model to predict accuracy based on the insight that these activations effectively capture the propagation of numerical error from computation. Using this method on deployable DNNs, including GoogLeNet Szegedy et al. (2015) and VGG Simonyan & Zisserman (2014), we find that using these recommendations to introduce customized precision into a DNN accelerator fabric results in an average speedup of 7.6× with less than 1% degradation in inference accuracy.
2 CUSTOMIZED PRECISION HARDWARE
We begin with an overview of the available design choices in the representation of real numbers in binary and discuss how these choices impact hardware performance.
2.1 DESIGN SPACE
We consider three aspects of customized precision number representations. First, we contrast the high-level choice between fixed-point and floating-point representations. Fixed-point binary arithmetic is computationally identical to integer arithmetic, simply changing the interpretation of each bit position. Floating-point arithmetic, however, represents the sign, mantissa, and exponent of a real number separately. Floating-point calculations involve several steps absent in integer arithmetic. In particular, addition operations require aligning the mantissas of each operand. As a result, floatingpoint computation units are substantially larger, slower, and more complex than integer units. In CPUs and GPUs, available sizes for both integers and floating-point calculations are fixed according to the data types supported by the hardware. Thus, the second aspect of precision customization we examine is to consider customizing the number of bits used in representing floating-point and fixed-point numbers. Third, we may vary the interpretation of fixed-point numbers and assignment of bits to the mantissa and exponent in a floating-point value.
2.2 CUSTOMIZED PRECISION TYPES
In a fixed-point representation, we select the number of bits as well as the position of the radix point, which separates integer and fractional bits, as illustrated in Figure 1. A bit array $x$, encoded in fixed point with the radix point at bit $l$ (counting from the right), represents the value $2^{-l}\sum_{i=0}^{N-1} 2^{i} \cdot x_i$.
In contrast to floating point, fixed-point representations with a particular number of bits have a fixed level of precision. By varying the position of the radix point, we change the representable range.
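To make the fixed-point mapping concrete, below is a minimal NumPy sketch (our own illustration, not code from the paper) that rounds real values onto the grid defined by a total bit width and a radix-point position, saturating at the representable range; for simplicity it follows the unsigned formulation above and omits sign handling.

```python
import numpy as np

def fixed_point_quantize(x, total_bits, frac_bits):
    """Round x to the nearest value representable as an unsigned fixed-point number
    with `total_bits` bits and the radix point at bit `frac_bits` (value =
    2**-frac_bits * integer code), saturating at the representable range."""
    step = 2.0 ** -frac_bits                    # smallest representable increment
    max_code = 2 ** total_bits - 1              # largest unsigned integer code
    code = np.clip(np.round(np.asarray(x) / step), 0, max_code)
    return code * step

# Example: 16 bits with the radix point in the middle (8 integer, 8 fraction bits)
print(fixed_point_quantize([3.14159, 300.0], total_bits=16, frac_bits=8))
# pi is rounded onto the grid (3.140625); 300 saturates at roughly 255.996
```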
An example floating-point representation is depicted in Figure 2. As shown in the figure, there are three parameters to select when designing a floating-point representation: the bit-width of the mantissa, the bit-width of the exponent, and an exponent bias. The widths of the mantissa and exponent control precision and dynamic range, respectively. The exponent bias adjusts the offset of the exponent (which is itself represented as an unsigned integer) relative to zero to facilitate positive and negative exponents. Finally, an additional bit represents the sign. Thus, a floating-point format with Nm mantissa bits, Ne exponent bits, and a bias of b, encodes the value
$2^{\left(\sum_{i=0}^{N_e-1} 2^{i} \cdot e_i\right) - b}\left(1 + \sum_{i=1}^{N_m} 2^{-i} \cdot m_i\right)$, where $m$ and $e$ are the segments of a bit array representing the mantissa and exponent, respectively. Note that the leading bit of the mantissa is assumed to be 1 and hence is not explicitly stored, eliminating redundant encodings of the same value. A single-precision value in the IEEE-754 standard (i.e. float) comprises 23 mantissa bits, 8 exponent bits, and a sign bit. IEEE-754 standardized floating-point formats include special encodings for specific values, such as zero and infinity. Both fixed-point and floating-point representations have limitations in terms of the precision and the dynamic ranges available given particular representations, manifesting themselves computationally as rounding and saturation errors. These errors propagate through the deep neural network in a way that is difficult to estimate holistically, prompting experimentation on the DNN itself.
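As a sanity check on the formula above, the following sketch (an assumed helper of our own, not the authors' code) decodes a value from explicit sign, exponent, and mantissa bit fields, ignoring special encodings such as zero, infinity, and denormals.

```python
def custom_float_value(sign_bit, exponent_bits, mantissa_bits, bias):
    """Decode value = (-1)**s * 2**(E - bias) * (1 + sum_i m_i * 2**-i) from explicit
    bit fields (most-significant bit first), ignoring special encodings such as
    zero, infinity, and denormals."""
    exponent = 0
    for bit in exponent_bits:                   # unsigned integer exponent field
        exponent = exponent * 2 + bit
    mantissa = 1.0                              # implicit leading 1
    for i, bit in enumerate(mantissa_bits, start=1):
        mantissa += bit * 2.0 ** -i
    return (-1) ** sign_bit * 2.0 ** (exponent - bias) * mantissa

# A toy format with 5 exponent bits and 2 mantissa bits, bias 15:
print(custom_float_value(0, [1, 0, 0, 0, 0], [1, 0], bias=15))  # 2**1 * 1.5 = 3.0
```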
2.3 HARDWARE IMPLICATIONS
The key hardware building block for implementing DNNs is the multiply-accumulate (MAC) operation. The MAC operation implements the sum-of-products operation that is fundamental to the activation of each neuron. We show a high-level hardware block diagram of a MAC unit in Figure 3 (a). Figure 3 (b) adds detail for the addition operation, the more complex of the two operations. As seen in the figure, floating-point addition operations involve a number of sub-components that compare exponents, align mantissas, perform the addition, and normalize the result. Nearly all of the sub-components of the MAC unit scale in speed, power, and area with the bit width.
Reducing the floating-point bit width improves hardware performance in two ways. First, reduced bit width makes a computation unit faster. Binary arithmetic computations involve chains of logic operations that typically grow at least logarithmically, and sometimes linearly (e.g., the propagation of carries in an addition, see Figure 3 (c)), in the number of bits. Reducing the bit width reduces the length of these chains, allowing the logic to operate at a higher clock frequency. Second, reduced bit width makes a computation unit smaller and lowers its energy consumption, typically linearly in the number of bits. The circuit delay and area are shown in Figure 4 when the mantissa bit widths are varied. As shown in the figure, scaling the length of the mantissa provides substantial opportunity because it defines the size of the internal addition unit. Similar trends follow for bit-widths in other representations. When a unit is smaller, more replicas can fit within the same chip area and power budget, all of which can operate in parallel. Hence, for computations like those in DNNs, where ample parallelism is available, area reductions translate into proportional performance improvement.
This trend of bit width versus speed, power, and area is applicable to every computation unit in hardware DNN implementations. Thus, in designing hardware that uses customized representations
there is a trade-off between accuracy on the one hand and power, area, and speed on the other. Our goal is to use precision that delivers sufficient accuracy while attaining large improvements in power, area, and speed over standard floating-point designs.
3 METHODOLOGY
We describe the methodology we use to evaluate the customized precision design space, using image classification tasks of varying complexity as a proxy for computer vision applications. We evaluate DNN implementations using several metrics: classification accuracy, speedup, and energy savings relative to a baseline custom hardware design that uses single-precision floating-point representations. Using the results of this analysis, we propose and validate a search technique to efficiently determine the correct customized precision design point.
3.1 ACCURACY
We evaluate accuracy by modifying the Caffe Jia et al. (2014) deep learning framework to perform calculations with arbitrary fixed-point and floating-point formats. We continue to store values as C floats in Caffe, but truncate the mantissa and exponent to the desired format after each arithmetic operation. Accuracy, using a set of test inputs disjoint from the training input set, is then measured by running the forward pass of a DNN model with the customized format and comparing the outputs with the ground truth. We use the standard accuracy metrics that accompany the dataset for each DNN. For MNIST (LeNet-5) and CIFAR-10 (CIFARNET) we use top-1 accuracy and for ImageNet (GoogLeNet, VGG, and AlexNet) we use top-5 accuracy. Top-1 accuracy denotes the percent of inputs that the DNN predicts correctly after a single prediction attempt, while top-5 accuracy represents the percent of inputs that the DNN predicts correctly after five attempts.
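The quantization step described above can also be emulated outside Caffe; the following pure-Python sketch is an approximation under our own assumptions about rounding, overflow, and underflow handling (details the paper does not spell out), and would be applied after every arithmetic operation to mimic reduced-precision MACs.

```python
import math

def quantize_to_custom_float(x, n_mantissa, n_exponent, bias):
    """Round x to the nearest value representable with `n_mantissa` mantissa bits,
    `n_exponent` exponent bits, and the given bias (no denormals; saturate on
    overflow, flush to zero on underflow)."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))                      # abs(x) = m * 2**e with m in [0.5, 1)
    m, e = m * 2.0, e - 1                          # renormalize so m is in [1, 2)
    m = round(m * 2 ** n_mantissa) / 2 ** n_mantissa
    if m >= 2.0:                                   # rounding can carry into the exponent
        m, e = m / 2.0, e + 1
    e_min, e_max = -bias, (2 ** n_exponent - 1) - bias
    if e > e_max:                                  # overflow: saturate at the largest value
        return sign * (2.0 - 2.0 ** -n_mantissa) * 2.0 ** e_max
    if e < e_min:                                  # underflow: flush to zero
        return 0.0
    return sign * m * 2.0 ** e

# Applied after each multiply and each accumulate to mimic reduced-precision MACs:
acc = 0.0
for w, a in [(0.75, 1.3), (-0.5, 2.2), (1.1, 0.9)]:
    prod = quantize_to_custom_float(w * a, n_mantissa=7, n_exponent=6, bias=31)
    acc = quantize_to_custom_float(acc + prod, n_mantissa=7, n_exponent=6, bias=31)
```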
3.2 EFFICIENCY
We quantify the efficiency advantages of customized floating-point representations by designing a floating-point MAC unit in each candidate precision and determining its silicon area and delay characteristics. We then report speedup and energy savings relative to a baseline custom hardware implementation of a DNN that uses standard single-precision floating-point computations. We design each variant of the MAC unit using Synopsys Design Compiler and Synopsys PrimeTime, industry standard ASIC design tools, targeting a commercial 28nm silicon manufacturing process. The tools report the power, delay, and area characteristics of each precision variant. As shown in Figure 5, we compute speedups and energy savings relative to the standardized IEEE-754 floating-point representation considering both the clock frequency advantage and improved parallelism due to area reduction of the narrower bit-width MAC units. This allows customized precision designs to yield a quadratic improvement in total system throughput.
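As a rough illustration of how the frequency and parallelism advantages compound into that quadratic improvement (the delay and area numbers below are invented, not the synthesis results from the paper), the relative throughput of a narrower MAC can be estimated as:

```python
def throughput_gain(delay_ns, area_um2, base_delay_ns=1.0, base_area_um2=3600.0):
    """Relative throughput of a narrower MAC versus the single-precision baseline:
    clock-frequency gain (shorter critical path) times parallelism gain (more
    units fit in the same silicon area), assuming a compute-bound workload."""
    frequency_gain = base_delay_ns / delay_ns
    parallelism_gain = base_area_um2 / area_um2
    return frequency_gain * parallelism_gain

# Invented numbers: a unit that is 2x faster and 3.6x smaller gives a 7.2x gain
print(throughput_gain(delay_ns=0.5, area_um2=1000.0))
```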
3.3 EFFICIENT CUSTOMIZED PRECISION SEARCH
To exploit the benefits of customized precision, a mechanism to select the correct configuration must be introduced. There are hundreds of designs among floating-point and fixed-point formats due to designs varying by the total bit width and the allocation of those bits. This spectrum of designs strains the ability to select an optimal configuration. A straightforward approach to select the customized precision design point is to exhaustively compute the accuracy of each design with a large number of neural network inputs. This strategy requires substantial computational resources that are proportional to the size of the network and variety of output classifications. We describe our technique that significantly reduces the time required to search for the correct configuration in order to facilitate the use of customized precision.
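For concreteness, the candidate design space can be enumerated as a simple cross product of bit-width choices; the ranges below are illustrative assumptions, not the exact grid searched in the paper.

```python
from itertools import product

# Floating-point candidates: (mantissa bits, exponent bits); ranges are assumptions.
float_designs = [("float", m, e) for m, e in product(range(1, 24), range(2, 9))]
# Fixed-point candidates: (integer bits, fraction bits); ranges are assumptions.
fixed_designs = [("fixed", i, f) for i, f in product(range(1, 33), range(0, 17))]

design_space = float_designs + fixed_designs
print(len(design_space))  # hundreds of candidate precision configurations
```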
The key insight behind our search method is that customized precision impacts the underlying internal computation, which is hidden by evaluating only the NN final accuracy metric. Thus, instead
of comparing the final accuracy generated by networks with different precision configurations, we compare the original NN activations to the customized precision activations. This circumvents the need to evaluate the large number of inputs required to produce representative neural network accuracy. Furthermore, instead of examining all of the activations, we only analyze the last layer, since the last layer captures the usable output from the neural network as well as the propagation of lost accuracy. Our method summarizes the differences between the last layer of two configurations by calculating the linear coefficient of determination between the last layer activations.
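A minimal sketch of that summary statistic (assuming a simple Pearson-based coefficient of determination, since the paper does not give the exact formula):

```python
import numpy as np

def last_layer_fidelity(reference_activations, reduced_precision_activations):
    """Linear coefficient of determination (R^2) between the last-layer activations
    of the original network and those of a reduced-precision run on the same inputs."""
    x = np.asarray(reference_activations, dtype=np.float64).ravel()
    y = np.asarray(reduced_precision_activations, dtype=np.float64).ravel()
    r = np.corrcoef(x, y)[0, 1]  # Pearson correlation of the two activation vectors
    return r ** 2                # values near 1.0 predict little accuracy degradation
```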
A method to translate the coefficient of determination to a more desirable metric, such as end-to-end inference accuracy, is necessary. We find that a linear model provides such a transformation. The customized precision setting with the highest speedup that meets a specified accuracy threshold is then selected. In order to account for slight inaccuracies in the model, inference accuracy for a subset of configurations is evaluated. If the configuration provided by the accuracy model results in insufficient accuracy, then an additional bit is added and the process repeats. Similarly, if the accuracy threshold is met, then a bit is removed from the customized precision format.
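Putting the pieces together, the selection-and-refinement procedure can be sketched as below; the helper names, the choice to adjust the mantissa width, and the refinement budget are our own assumptions about details the paper leaves open.

```python
def select_precision(speedup, predicted_accuracy, measure_accuracy, target, refinements=2):
    """speedup / predicted_accuracy: dicts keyed by ("float", n_mantissa, n_exponent);
    measure_accuracy(fmt): runs real inference with format fmt on a validation subset."""
    def widen(fmt):
        kind, m, e = fmt
        return (kind, m + 1, e)              # add one bit (here: to the mantissa)

    def narrow(fmt):
        kind, m, e = fmt
        return (kind, max(m - 1, 1), e)      # remove one bit

    viable = [f for f in speedup if predicted_accuracy[f] >= target]
    fmt = max(viable, key=speedup.get)       # fastest format predicted to meet the target
    for _ in range(refinements):             # a few real evaluations to correct the model
        if measure_accuracy(fmt) < target:
            fmt = widen(fmt)                 # model was optimistic: add a bit back
        elif measure_accuracy(narrow(fmt)) >= target:
            fmt = narrow(fmt)                # model was pessimistic: one more bit can go
    return fmt
```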
4 EXPERIMENTS
In this section, we evaluate five common neural networks spanning a range of sizes and depths in the context of customized precision hardware. We explore the trade-off between accuracy and efficiency when various customized precision representations are employed. Next, we address the sources of accuracy degradation when customized precision is utilized. Finally, we examine the characteristics of our customized precision search technique.
4.1 EXPERIMENTAL SETUP
We evaluate the accuracy of customized precision operations on five DNNs: GoogLeNet Szegedy et al. (2015), VGG Simonyan & Zisserman (2014), AlexNet Krizhevsky et al. (2012), CIFARNET Krizhevsky & Hinton (2009), and LeNet-5 LeCun et al. (1998). The implementations and pre-trained weights for these DNNs were taken from Caffe Jia et al. (2014). The three largest DNNs (GoogLeNet, VGG, and AlexNet) represent real-world workloads, while the two smaller DNNs (CIFARNET and LeNet-5) are the largest DNNs evaluated in prior work on customized precision. For each DNN, we use the canonical benchmark validation set: ImageNet for GoogLeNet, VGG, and AlexNet; CIFAR-10 for CIFARNET; MNIST for LeNet-5. We utilize the entire validation set for all experiments, except for GoogLeNet and VGG experiments involving the entire design space. In these cases we use a randomly-selected 1% of the validation set to make the experiments tractable.
4.2 ACCURACY VERSUS EFFICIENCY TRADE-OFFS
To evaluate the benefits of customized precision hardware, we swept the design space for accuracy and performance characteristics. This performance-accuracy trade off is shown in Figure 6. This figure shows the DNN inference accuracy across the full input set versus the speedup for each of the five DNN benchmarks. The black star represents the IEEE 754 single precision representation (i.e. the original accuracy with 1× speedup), while the red circles and blue triangles represent the complete set of our customized precision floating-point and fixed-point representations, respectively.
For GoogLeNet, VGG, and AlexNet it is clear that the floating-point format is superior to the fixed-point format. In fact, the standard single precision floating-point format is faster than all fixed-point configurations that achieve above 40% accuracy. Although fixed-point computation is simpler and faster than floating-point computation when the number of bits is fixed, customized precision floating-point representations are more efficient because fewer bits are needed for similar accuracy.
By comparing the results across the five different networks in Figure 6, it is apparent that the size and structure of the network impacts the customized precision flexibility of the network. This insight suggests that hardware designers should carefully consider which neural network(s) they expect their device to execute as one of the fundamental steps in the design process. The impact of network size on accuracy is discussed in further detail in the following section.
The specific impact of bit assignments on performance and energy efficiency is illustrated in Figure 7. This figure shows the speedup and energy improvements over the single precision floating-point representation as the number of allocated bits is varied. For the floating-point representations, the number of bits allocated for the mantissa (x-axis) and exponent (y-axis) are varied. For the fixed-point representations, the number of bits allocated for the integer (x-axis) and fraction (y-axis) are varied. We highlight a region in the plot deemed to have acceptable accuracy. In this case, we define acceptable accuracy to be 99% normalized AlexNet accuracy (i.e., no less than a 1% degradation in accuracy from the IEEE 754 single precision accuracy on classification in AlexNet).
The fastest and most energy efficient representation occurs at the bottom-left corner of the region with acceptable accuracy, since a minimal number of bits are used. The configuration with the highest performance that meets this requirement is a floating-point representation with 6 exponent bits and 7 mantissa bits, which yields a 7.2× speedup and a 3.4× savings in energy over the single precision IEEE 754 floating-point format. If a more stringent accuracy requirement is necessary, 0.3% accuracy degradation, the representation with one additional bit in the mantissa can be used, which achieves a 5.7× speedup and 3.0× energy savings.
4.3 SOURCES OF ACCUMULATION ERROR
In order to understand how customized precision degrades DNN accuracy among numeric representations, we examine the impact of various reduced precision computations on a neuron. Figure 8 presents the serialized accumulation of neuron inputs in the third convolution layer of AlexNet. The x-axis represents the number of inputs that have been accumulated, while the y-axis represents the current value of the running sum. The black line represents the original DNN computation, a baseline for customized precision settings to match. We find two causes of error between the customized precision fixed-point and floating-point representations, saturation and excessive rounding.
In the fixed-point case (green line, representing 16 bits with the radix point in the center), the central cause of error is from saturation at the extreme values. The running sum exceeds 255, the maximum representable value in this representation, after 60 inputs are accumulated, as seen in the figure.
After reaching saturation, the positive values are discarded and the final output is unpredictable. Although floating-point representations do not saturate as easily, the floating-point configuration with 10 mantissa bits and 4 exponent bits (orange line) saturates after accumulating 1128 inputs. Again, the lost information from saturation causes an unpredictable final output.
For the next case, the floating-point configuration with 2 bits and 14 bits for the mantissa and exponent (blue line), respectively, we find that the lack of precision for large values causes excessive rounding errors. As shown in the figure, after accumulating 120 inputs, this configuration’s running sum exceeds 256, which limits the minimum adjustment in magnitude to 64 (the exponent normalizes the mantissa to 256, so the two mantissa bits represent 128 and 64). Finally, one of the customized precision types that has high performance and accuracy for AlexNet, 8 mantissa bits and 6 exponent bits (red line), is shown as well. This configuration almost perfectly matches the IEEE 754 floating-point configuration, as expected based on the final output accuracy.
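The fixed-point saturation behaviour is easy to reproduce with a toy simulation; the inputs below are synthetic and the unsigned, round-then-saturate model is a simplification of the hardware, so this only illustrates the qualitative effect seen in Figure 8.

```python
import random

random.seed(0)
step, max_val = 2 ** -8, 2 ** 8 - 2 ** -8   # 16-bit fixed point, radix point in the middle
inputs = [random.uniform(0.0, 6.0) for _ in range(400)]  # synthetic neuron inputs

exact = quantized = 0.0
for x in inputs:
    exact += x
    quantized = min(round((quantized + x) / step) * step, max_val)  # quantize, then saturate

print(exact, quantized)  # the quantized running sum gets stuck near 256 once it saturates
```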
The other main cause of accuracy loss is from values that are too small to be encoded as a non-zero value in the chosen customized precision configuration. These values, although not critical during addition, cause significant problems when multiplied with a large value, since the output should be encoded as a non-zero value in the specific precision setting. We found that the weighted input is minimally impacted, until the precision is reduced low enough for the weight to become zero.
While it may be intuitive based on these results to apply different customized precision settings to various stages of the neural network in order to mitigate the sudden loss in accuracy, realizing the gains of multi-precision configurations presents significant challenges. The variability between units will cause certain units to be unused during specific layers of the neural network, causing gains to diminish (e.g., 11-bit units are idle when 16-bit units are required for a particular layer). Also, application-specific hardware design is already an extensive process, and multiple customized precision configurations increase the difficulty of the hardware design and verification process.
4.4 CUSTOMIZED PRECISION SEARCH
Now we evaluate our proposed customized precision search method. The goal of this method is to significantly reduce the required time to navigate the customized precision design space and still provide an optimal design choice in terms of speedup, limited by an accuracy constraint.
Correlation model. First, we present the linear correlation-accuracy model in Figure 9, which shows the relationship between the normalized accuracy of each setting in the design space and the correlation between its last-layer activations and those of the original NN. This model, although built using all of the customized precision configurations from the AlexNet, CIFARNET, and LeNet-5 neural networks, produces a good fit with a correlation of 0.96. It is important that the model matches across networks and precision design choices (e.g., floating point versus fixed point), since creating this model for each DNN, individually, requires as much time as exhaustive search.
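A sketch of fitting that correlation-to-accuracy model (the numeric pairs below are invented placeholders, not the measured values behind Figure 9):

```python
import numpy as np

# Placeholder (R^2, normalized accuracy) pairs standing in for measurements collected
# across precision configurations of AlexNet, CIFARNET, and LeNet-5 (values invented).
r2 = np.array([0.999, 0.99, 0.95, 0.80, 0.40, 0.05])
normalized_accuracy = np.array([1.00, 0.99, 0.96, 0.85, 0.45, 0.10])

slope, intercept = np.polyfit(r2, normalized_accuracy, 1)  # linear accuracy model

def predict_accuracy(r_squared):
    return slope * r_squared + intercept

print(predict_accuracy(0.97))
```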
Validation. To validate our search technique, Figure 10 presents the accuracy-speedup trade-off curves from our method compared to the ideal design points. We first obtain optimal results via
exhaustive search. We present our search with a variable number of refinement iterations, where we evaluate the accuracy of the current design point and adjust the precision if necessary. To verify robustness, the accuracy models were generated using cross-validation where all configurations in the DNN being searched are excluded (e.g., we build the AlexNet model with LeNet-5 and CIFARNET accuracy/correlation pairs). The prediction is made using only ten randomly selected inputs, a tiny subset compared to that needed for classification accuracy, some of which are even incorrectly classified by the original neural network. Thus, the cost of prediction using the model is negligible.
We observe that, in all cases, the accuracy model combined with the evaluation of just two customized precision configurations provides the same result as the exhaustive search. Evaluating two designs out of 340 is 170× faster than exhaustively evaluating all designs. When only one configuration is evaluated instead of two (i.e. a further 50% reduction in search time), the selected customized precision setting never violates the target accuracy, but concedes a small amount of performance. Finally, we note that our search mechanism, without evaluating inference accuracy for any of the design points, provides a representative prediction of the optimal customized precision setting. Although occasionally violating the target accuracy (i.e. the cases where the speedup is higher than the exhaustive search), this prediction can be used to gauge the amenability of the NN to customized precision without investing any considerable amount of time in experimentation.
Speedup. We present the final speedup produced by our search method in Figure 11 when the algorithm is configured for 99% target accuracy and to use two samples for refinement. In all cases, the chosen customized precision configuration meets the targeted accuracy constraint. In most cases, we find that the larger networks require more precision (DNNs are sorted from left to right in descending order based on size). VGG requires less precision than expected, but VGG also uses smaller convolution kernels than all of the other DNNs except LeNet-5.
5 RELATED WORK
To the best of our knowledge, our work is the first to examine the impact of numeric representations on the accuracy-efficiency trade-offs on large-scale, deployed DNNs with over half a million neurons (GoogLeNet, VGG, AlexNet), whereas prior work has only reported results on much smaller networks such as CIFARNET and LeNet-5 Cavigelli et al. (2015); Chen et al. (2014); Courbariaux et al. (2014); Du et al. (2014); Gupta et al. (2015); Muller & Indiveri (2015). Many of these works focused on fixed-point computation due to the fixed-point representation working well on small-scale neural networks. We find very different conclusions when considering production-ready DNNs.
Other recent works have looked at alternative neural network implementations such as spiking neural networks for more efficient hardware implementation Conti & Benini (2015); Diehl & Cook (2014). This is a very different computational model that requires redevelopment of standard DNNs, unlike our proposed methodologies. Other works have proposed several approaches to improve performance and reduce energy consumption of deep neural networks by taking advantage of the fact that DNNs usually contain redundancies Chen et al. (2015); Figurnov et al. (2015).
6 CONCLUSION
In this work, we demonstrated the importance of carefully considering customized precision when realizing neural networks. We show that using the IEEE 754 single precision floating point representation in hardware results in surrendering substantial performance. On the other hand, picking a configuration that has lower precision than optimal will result in severe accuracy loss. By reconsidering the representation from the ground up in designing custom precision hardware and using our search technique, we find an average speedup across deployable DNNs, including GoogLeNet and VGG, of 7.6× with less than 1% degradation in inference accuracy. | 1. How does the custom floating point representation support denormal numbers?
2. Are the custom floating point units clocked at the same frequency as the baseline 32-bit floating point unit? If not, how does the difference in frequency impact the system design?
3. How does the reduction in floating point unit energy consumption translate to total energy savings?
4. What are the implications of non-byte aligned memory accesses on the system design and performance? | Review | Review
This paper explores the performance-area-energy-model accuracy tradeoff encountered in designing custom number representations for deep learning inference. Common image-based benchmarks: VGG, Googlenet etc are used to demonstrate that fewer than1 6 bits in a custom floating point representation can lead to improvement in runtime performance and energy efficiency with only a small loss in model accuracy.
Questions:
1. Does the custom floating point number representation take into account support for de-normal numbers?
2. Is the custom floating point unit clocked at the same frequency as the baseline 32-bit floating point unit? If not, what are the different frequencies used, and how would this impact the overall system design in terms of feeding data to the floating point units from memory?
Comments:
1. I would recommend using the IEEE half-precision floating point (1-bit sign, 5-bit exponent, and 10-bit mantissa) as a baseline for comparison. At this point, it is well known in both the ML and the HW communities that 32-bit floats are overkill for DNN inference, and major HW vendors already include support for IEEE half-precision floats.
2. In my opinion, the claim that switching to custom floating point leads to a YY.ZZ x savings in energy is misleading. It might be true that the floating-point unit itself consumes less energy due to the smaller bit-width of the operands; however, a large fraction of the total energy is spent in data movement to/from the memories. As a result, reducing the floating point unit’s energy consumption by a certain factor will not translate to the same reduction in the total energy. A reader not familiar with such nuances (for example, a typical member of the ML community) may be misled by such claims.
3. On a similar note as comment 2, the authors should explicitly mention that the claimed speedup is that of the floating point unit only, and it will not translate to the overall workload speedup. Although the speedup of the compute unit is roughly quadratic in the bit-width, the bandwidth requirements scale linearly with bit-width. As a result, it is possible that these custom floating point units may be starved for memory bandwidth, in which case the claims of speedup and energy savings need to be revisited.
4. The authors should also comment on the complexities and overheads introduced in data accesses, designing the various system buses/ data paths when the number representation is not byte-aligned. Moving to a custom 14-bit number representation (for example) can improve the performance and energy-efficiency of the floating point unit, but these gains can be partially eroded due to the additional overhead in supporting non-byte aligned memory accesses. |
ICLR | Title
Rethinking Numerical Representations for Deep Neural Networks
Abstract
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency. In this work, we explore unconventional narrow-precision floating-point representations as they relate to inference accuracy and efficiency to steer the improved design of future DNN platforms. We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6× with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point. To facilitate the use of such customized precision, we also present a novel technique that drastically reduces the time required to derive the optimal precision configuration.
1 INTRODUCTION
Recently, deep neural networks (DNNs) have yielded state-of-the-art performance on a wide array of AI tasks, including image classification Krizhevsky et al. (2012), speech recognition Hannun et al. (2014), and language understanding Sutskever et al. (2014). In addition to algorithmic innovations Nair & Hinton (2010); Srivastava et al. (2014); Taigman et al. (2014), key drivers behind these successes are advances in computing infrastructure that enable large-scale deep learning—the training and inference of large DNN models on massive datasets Dean et al. (2012); Farabet et al. (2013). Indeed, highly efficient GPU implementations of DNNs played a key role in the first breakthrough of deep learning for image classification Krizhevsky et al. (2012). Given the ever-growing amount of data available for indexing, analysis, and training, and the increasing prevalence of ever-larger DNNs as key building blocks for AI applications, it is critical to design computing platforms to support faster, more resource-efficient DNN computation.
1. What is the focus of the paper regarding large convolutional networks?
2. What are the strengths and weaknesses of the proposed approach in terms of speed-ups and precision?
3. How does the reviewer assess the significance and applicability of the results for hardware manufacturers?
4. Are there any concerns or suggestions regarding the evaluation methodology and the impact of batch normalization?
5. How does the reviewer perceive the overall novelty and relevance of the paper's content? | Review | Review
The paper provides a first study of customized precision hardware for large convolutional networks, namely alexnet, vgg and googlenet. It shows that it is possible to achieve larger speed-ups using floating-point precision (up to 7x) when using fewer bits, and better than using fixed-point representations.
The paper also explores predicting custom floating-point precision parameters directly from the neural network activations, avoiding exhaustive search, but I could not follow this part. Only the activations of the last layer are evaluated, but on what data? On the whole validation set? Why would this be faster than computing the classification accuracy?
The results should be useful for hardware manufacturers, but with a catch. All popular convolutional networks now use batch normalization, while none of the evaluated ones do. It may well be that the conclusions of this study will be completely different on batch normalization networks, and fixed-point representations are best there, but that remains to be seen. It seems like something worth exploring.
Overall there is not a great deal of novelty other than being a useful study on numerical precision trade-offs at neural network test time. Training time is also something of interest. There are a lot more researchers trying to train new networks fast than trying to evaluate old ones fast.
I am also no expert in digital logic design, but my educated guess is that this paper is marginally below the acceptance threshold. |
ICLR | Title
Multimodal Open-Vocabulary Video Classification via Vision and Language Models
Abstract
Utilizing vision and language models (VLMs) pre-trained on internet-scale image-text pairs is becoming a promising paradigm for open-vocabulary vision tasks. This work conducts an extensive study for multimodal open-vocabulary video classification via pre-trained VLMs by leveraging motion and audio that naturally exist in the video. We design an asymmetrical cross-modal fusion mechanism to aggregate multimodal information differently for video and optical flow / audio. Experiments on Kinetics and VGGSound show that introducing more modalities significantly improves the accuracy on seen classes, while generalizing better to unseen classes over existing approaches. Despite its simplicity, our method achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot techniques, video-text pre-training methods and recent VLM-based approaches. Code and models will be released.
1 INTRODUCTION
Building open-vocabulary models capable of predicting beyond a fixed set of training classes is of crucial importance in computer vision. Recently, vision and language models (VLMs) pre-trained on internet-scale image-text pairs, e.g., CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), demonstrate impressive transferability on a wide range of vision tasks. Utilizing strong pre-trained VLMs is becoming a promising paradigm for open-vocabulary vision tasks including object detection (Gu et al., 2022) and image segmentation (Ghiasi et al., 2021; Li et al., 2022a).
In this work, we focus on the novel challenging task of multimodal open-vocabulary video classification via pre-trained VLMs. We set up open-vocabulary video benchmarks by utilizing two existing large-scale multimodal video datasets: Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). Specifically, we constructed two sets of classes: base (seen) and novel (unseen). For base classes, we have both training and testing videos, aiming at helping the pre-trained VLMs adapt to the video domain. While for novel classes, we only have testing videos, mimicking the real-world challenge of open-vocabulary video classification. To the best of our knowledge, we are the first to study how to leverage pre-trained VLMs for multimodal open-vocabulary video classification.
We start with directly fine-tuning the vision encoder of CLIP (Radford et al., 2021) with the language encoder fixed, using the training videos from base classes. As shown in Fig. 1 (a-d), although there is a decent performance gain for base classes, the accuracy for novel classes decreases significantly. This observation corroborates the findings of Zhou et al. (2022) on adapting pre-trained VLMs.
On the other hand, despite rich multimodal contents in internet videos, signals such as audio and motion are less explored in recent open-vocabulary models. This is in stark contrast to the human perception system that heavily relies on multimodal signals (Smith & Gasser, 2005). Can we leverage multimodal information to improve open-vocabulary models?
Instead of using specially designed modality-specific encoders (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrogram.
We then conduct the same experiments by fine-tuning CLIP’s vision encoder but instead using flow or audio as the input. As shown in Fig. 1 (e-h), surprisingly, we find that fine-tuning on base classes
[Figure 1 panels (a)–(h): accuracy (%) vs. training epochs for Kinetics VIDEO, VGGSound VIDEO, Kinetics FLOW, and VGGSound AUDIO, on base and novel classes.]
Figure 1: Fine-tuning pre-trained CLIP with video, flow and audio modalities. For all three modalities, fine-tuning on labeled base classes leads to significant accuracy improvement (a, c, e, g). However, when evaluating the same model on novel classes, the video modality shows decreasing performance (b, d), while the performance for both flow and audio modality is improving (f, h).
is able to also improve the performance on novel classes. This suggests that we may use flow or audio modality to improve the base to novel generalization of video modality.
In light of these observations, we propose MOV, a simple yet effective method for Multimodal Open-Vocabulary video classification. Fig. 2 shows an overview of our method. In MOV, we design a novel asymmetrical cross-modal fusion mechanism using cross-attention to leverage complementary multimodal information differently for video and optical flow / audio. The core idea is to exploit the strong transferability in the pre-trained vision encoder, while allowing greater flexibility in finetuning flow and audio encoders. MOV is trained using multimodal inputs from base classes and is able to predict both base and novel classes during inference.
We carry out extensive experiments and ablation studies on Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). MOV shows clear improvements over CLIP (Radford et al., 2021), recent CLIP adaptation techniques (Zhou et al., 2021; Gao et al., 2021), as well as videotext pre-training methods (Akbari et al., 2021) on both base and novel classes. MOV also achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot methods, state-of-the-art VLM adaption techniques, and a variety of video-text pre-training approaches. Furthermore, MOV is scalable with much stronger backbones, indicating its potential to be incorporated with large vision and language models.
2 RELATED WORK
Vision and language models. Learning a joint embedding space from vision and language modalities has been extensively studied during the past decade. Early works usually first encode two modalities separately, using hand-crafted descriptors (Elhoseiny et al., 2013) or deep networks (Lei Ba et al., 2015) for images, and skip-gram text models for languages (Frome et al., 2013). The crossmodality alignment is then achieved by metric learning (Frome et al., 2013) or language concepts (Li et al., 2017). Recently, learning vision and language modalities jointly through contrastive learning (Hadsell et al., 2006; Oord et al., 2018) becomes a promising direction. Impressive performance has been achieved by utilizing stronger encoders for vision (Dosovitskiy et al., 2021), language (Vaswani et al., 2017) and web-scale pre-training data (Hinton et al., 2015; Radford et al., 2021). CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are two representative approaches which show strong zero-shot 1 performance on various downstream tasks. Despite this strong baseline, adapting pre-trained VLMs to specific vision domains in a more effective way remains critical and is being actively studied. Examples abound, including image classification (Zhou et al., 2021; 2022; Gao et al., 2021), object detection (Gu et al., 2022; Zhong et al., 2022; Kamath et al., 2021; Li et al., 2022b), image segmentation (Ghiasi et al., 2021; Li et al., 2022a), audio classification (Guzhov et al., 2022) and video action recognition (Wang et al., 2021; Ju et al., 2021; Ni et al., 2022). Our method extends the existing research by adapting pre-trained VLMs to multimodal video and investigating the impact of additional input modalities like flow and audio.
1We use the term “zero-shot” when we need to align with settings described in some existing works. Otherwise, we would use “open-vocabulary” which we believe is a more precise term.
Open-vocabulary video classification. Zero-shot or open-vocabulary video action recognition is a representative task in this domain. Similar to early works of vision and language learning, the video input and labeled texts are encoded with modality-specific pre-trained models such as S3D (Xie et al., 2018), R(2+1)D (Tran et al., 2018) for video, and Word2Vec (Mikolov et al., 2013) for text. Since the generated video and text embeddings are not aligned, various methods have been proposed to bridge the gap by mapping two modalities into a joint embedding space (Wang & Chen, 2017; Chen & Huang, 2021; Gao et al., 2019; Wu et al., 2016; Xu et al., 2016; Zhu et al., 2018), mapping vision modality to language space (Bishay et al., 2019; Brattoli et al., 2020; Hahn et al., 2019; Xu et al., 2017) or mapping language modality to vision space (Mandal et al., 2019; Zhang & Peng, 2018). These joint embedding mapping methods are further extended to audiovisual classification (Mercea et al., 2022; Mazumder et al., 2021; Parida et al., 2020). Our approach shows that we can significantly improve the performance of open-vocabulary video classification by leveraging strong pre-trained VLMs and other modalities like flow and audio. To our knowledge, this has not been done by prior works in this field.
Multimodal fusion for video. Videos are a natural source of multimodal data including motion and audio. Two-stream networks are used to model video and optical flow simultaneously for action recognition (Simonyan & Zisserman, 2014; Wang et al., 2016; Feichtenhofer et al., 2016; 2017). Late fusion is adopted (Simonyan & Zisserman, 2014; Wang et al., 2016) and then thoroughly studied (Feichtenhofer et al., 2016; 2017) on how to better perform spatio-temporal fusion from two modalities. In the domain of audiovisual fusion, early methods (Chen & Rao, 1998) usually adopt straightforward score fusion or stacking input data for early fusion. Later research (Kazakos et al., 2019; Xiao et al., 2020; Fayek & Kumar, 2020; Nagrani et al., 2021; Chen & Ho, 2022; Chen et al., 2021; Zhao et al., 2022) focuses on developing better mid or late fusion strategies to improve the final performance. Different from existing works focusing on a fixed set of classes, we use multimodal fusion to help open-vocabulary video models generalize better to novel classes.
3 METHOD
An overview of our proposed method is shown in Fig. 2. We next describe each component.
3.1 MODALITY-SPECIFIC ENCODING
Given a pre-trained vision and language model, e.g., CLIP (Radford et al., 2021), we denote its vision encoder as h(·|θh) and its language encoder as g(·|θg). For a multimodal video input, we sample N RGB frames V and calculate the corresponding optical flow images F , resulting in V = {v1, v2, . . . , vN} and F = {f1, f2, . . . , fN}. We also generate the spectrogram image A from the raw audio waveform. More implementation details can be found in Sec. 4. We use the same encoder architecture h(·|·) to extract feature representations for video, flow and audio modalities, denoted as hv(·|θv), hf (·|θf ), and ha(·|θa) respectively. Model parameters θv , θf and θa are all initialized with the weight θh from the pre-trained VLM’s vision encoder. Apart from being simple and easy to implement, this design has two additional advantages: 1) the performance of adopting the pre-trained VLM’s vision encoder to other modalities is competitive against in-domain methods (a detailed study in Appendix A); 2) the vision encoder is trained to align with the language encoder, potentially helping the generalization from base to novel classes. We encode each modality separately as:
$v = h_v(V \mid \theta_v), \quad f = h_f(F \mid \theta_f), \quad a = h_a(A \mid \theta_a), \qquad (1)$
where v and f are features from N frames, and a is the representation of a single spectrogram image.
To better aggregate the temporal features of video and flow modalities, we attach temporal fusion networks φv(·) and φf (·), consisting of L transformer layers each, on top of hv(·|θv) and hf (·|θf ). We denote the input of the l-th transformer layer as zl and the input z0 can be either v or f . Then the forward pass of the l-th layer in φv(·) and φf (·) can be formulated as:
$y^{l} = \mathrm{MSA}(\mathrm{LN}(z^{l})) + z^{l}, \qquad (2)$
$z^{l+1} = \mathrm{MLP}(\mathrm{LN}(y^{l})) + y^{l}, \qquad (3)$
where LN stands for layer normalization, MSA represents multi-head self-attention, and MLP means multi-layer perceptron. For audio feature a, we simply attach an MLP module upon the backbone. We obtain the temporally fused features as:
$v_t = \phi_v(v), \quad f_t = \phi_f(f), \quad a_t = \mathrm{MLP}(a). \qquad (4)$
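As an illustration only, a single pre-norm temporal-fusion layer corresponding to Eqs. 2-3 could be written in PyTorch as below, using the embedding dimension (512) and head count (8) stated in Sec. 4.2; this is a sketch, not the released implementation.

```python
import torch
import torch.nn as nn

class TemporalFusionLayer(nn.Module):
    """One pre-norm transformer layer applied over the N frame features."""
    def __init__(self, dim=512, heads=8, mlp_ratio=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))

    def forward(self, z):                                   # z: (batch, N, dim)
        zn = self.ln1(z)
        y = self.attn(zn, zn, zn)[0] + z                    # Eq. (2)
        return self.mlp(self.ln2(y)) + y                    # Eq. (3)
```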
Finally, for the text modality, suppose we have p base classes with labels. We fill each of the class names into 28 video classification prompts provided by CLIP (Radford et al., 2021) like “a video of a person doing {class name}” and then encode the sentence using the pre-trained language encoder $g(\cdot \mid \theta_g)$ from VLM. The embedding of each class is averaged over all templates denoted as $\{B_i\}_{i=1}^{p}$.
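Assuming the open-source `clip` package, the averaged class text embeddings could be computed roughly as follows; the shortened prompt list is illustrative.

```python
import torch
import clip

model, _ = clip.load("ViT-B/16")
templates = ["a video of a person doing {}.",
             "a person is performing {} in a video."]  # subset of the 28 prompts

@torch.no_grad()
def class_embedding(class_name):
    tokens = clip.tokenize([t.format(class_name) for t in templates])
    feats = model.encode_text(tokens)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0)   # B_i: average over prompt templates
```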
3.2 ASYMMETRICAL MULTIMODAL FUSION
We adopt an asymmetrical cross-attention mechanism to fuse mutlimodal features. Without loss of generality, as shown in Fig. 2, our method described here is for fusing one of {flow, audio}modality with video modality. The algorithm can be easily extended to fusing video with more modalities.
For the video modality, we extract the information from other modalities to enhance the performance of video feature. Thus we use vt as the input for attention query, and ft or at from the other modality as the input for attention key and value. The fused multimodal video feature vm can be written as:
$v_t = \mathrm{MCA}(\mathrm{LN}(v_t), \mathrm{LN}(x_t)) + v_t, \quad x_t \in \{f_t, a_t\}, \qquad (5)$
$v_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(v_t)) + v_t\big), \qquad (6)$
where MCA denotes multi-head cross-attention, AvgPool denotes temporal average pooling.
For the audio and flow modalities, we adopt an asymmetrical design aiming at incorporating the information from the video modality to enhance the generalization ability of the feature to novel classes. Since the video temporal fusion network φv(·) for generating the video feature vt is trained on base classes, vt loses the generalization ability to novel classes (shown in Fig. 1). Therefore we choose to directly use the frozen vision encoder’s output v instead of vt for better generalization to novel classes. We obtain the fused multimodal flow and audio feature fm and am as:
$f_t = \mathrm{MCA}(\mathrm{LN}(f_t), \mathrm{LN}(v)) + f_t, \qquad a_t = \mathrm{MCA}(\mathrm{LN}(a_t), \mathrm{LN}(v)) + a_t, \qquad (7)$
$f_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(f_t)) + f_t\big), \qquad a_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(a_t)) + a_t\big). \qquad (8)$
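A simplified PyTorch sketch of this asymmetrical fusion (Eqs. 5-8) is given below; the separate layer norms for query and key/value follow Sec. 4.2, while the module and variable names are our own.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.ln_q, self.ln_kv = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mca = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln_out = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, query, key_value):                # (batch, N, dim) each
        kv = self.ln_kv(key_value)
        q = self.mca(self.ln_q(query), kv, kv)[0] + query   # Eqs. (5)/(7)
        fused = self.mlp(self.ln_out(q)) + q
        return fused.mean(dim=1)                        # AvgPool -> Eqs. (6)/(8)

# Asymmetry: video queries attend to the fine-tuned flow/audio features,
# while flow/audio queries attend to the frozen video features v, e.g.
# v_m = fuse_video(v_t, f_t);  f_m = fuse_flow(f_t, v)
```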
3.3 TRAINING AND INFERENCE ON BASE CLASSES
During training, each input multimodal video has a corresponding label y belonging to the base classes. We would optimize different modalities simultaneously via maximizing the video-text, flow-text and audio-text similarity. The training loss function can be formulated as:
$\mathcal{L} = \alpha \left( -\log \frac{\exp(\mathrm{sim}(v_m, B_y)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(v_m, B_i)/\tau)} \right) + (1 - \alpha) \left( -\log \frac{\exp(\mathrm{sim}(x_m, B_y)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(x_m, B_i)/\tau)} \right), \qquad (9)$
where xm ∈ {fm,am} is the final fused flow or audio feature, α is the weight for balancing two loss terms, sim(·, ·) is the cosine similarity, τ is a pre-defined temperature parameter. During training, we freeze the video encoder and the text encoder to retain their strong generalization to novel classes and save computation, while for other two modalities flow and audio, we fine-tune the encoder endto-end. An ablation study on fine-tuning different number of layers can be found in Tab. 6.
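A sketch of the objective in Eq. 9, assuming L2-normalized features and class text embeddings; everything apart from τ and α is illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def mov_loss(v_m, x_m, text_emb, labels, tau=0.01, alpha=0.5):
    """v_m, x_m: (batch, dim) fused video and flow/audio features.
    text_emb: (p, dim) base-class embeddings B_i; labels: (batch,) in [0, p)."""
    v_m, x_m = F.normalize(v_m, dim=-1), F.normalize(x_m, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits_v = v_m @ text_emb.t() / tau          # cosine similarity / temperature
    logits_x = x_m @ text_emb.t() / tau
    return alpha * F.cross_entropy(logits_v, labels) + \
           (1 - alpha) * F.cross_entropy(logits_x, labels)
```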
For inference on base classes, we compute the probability belonging to the j-th class by:
$P(j) = \frac{\exp(\mathrm{sim}(v_m, B_j)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(v_m, B_i)/\tau)}, \quad j \in \{1, 2, \ldots, p\}. \qquad (10)$
3.4 GENERALIZATION TO NOVEL CLASSES
Similar to base classes, we obtain the text embeddings for novel classes as $\{N_i\}_{i=1}^{q}$, where q is the number of novel classes. In addition to fused features $f_m$ or $a_m$, we also incorporate the video feature $v$ extracted from the frozen video encoder, followed by a temporal average pooling. Similar to Eq. 10, we compute the probability predictions as (here we only show flow modality for simplicity):
$P_f(j) = \frac{\exp(\mathrm{sim}(f_m, N_j)/\tau_f)}{\sum_{i=1}^{q} \exp(\mathrm{sim}(f_m, N_i)/\tau_f)}, \quad P_v(j) = \frac{\exp(\mathrm{sim}(v, N_j)/\tau_v)}{\sum_{i=1}^{q} \exp(\mathrm{sim}(v, N_i)/\tau_v)}, \quad j \in \{1, 2, \ldots, q\}. \qquad (11)$
We denote the probability distribution followed by $\{P_f(j)\}_{j=1}^{q}$ and $\{P_v(j)\}_{j=1}^{q}$ as $D_f$ and $D_v$. In our experiments we find the curve of $D_v$ tends to be much flatter (or have higher information entropy) than $D_f$ when the temperatures $\tau_v$ and $\tau_f$ are both set to the CLIP’s default value of 0.01, resulting in poor performance. We find simply lowering $\tau_v$ to 0.003 while keeping $\tau_f$ and $\tau_a$ as 0.01 solves this issue. A detailed ablation study about the temperature can be found in Appendix B.
The final probability predictions for novel classes are calculated by a weighted sum:
$P(j) = \beta P_f(j) + (1 - \beta) P_v(j). \qquad (12)$
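Putting Eqs. 11-12 together with the per-modality temperatures discussed in Appendix B, novel-class inference could be sketched as follows (illustrative, not the released code).

```python
import torch
import torch.nn.functional as F

def novel_class_probs(f_m, v, novel_text, tau_f=0.01, tau_v=0.003, beta=0.25):
    """f_m: (batch, dim) fused flow (or audio) feature, v: (batch, dim) frozen
    video feature, novel_text: (q, dim) embeddings N_i of the novel classes."""
    f_m, v = F.normalize(f_m, dim=-1), F.normalize(v, dim=-1)
    novel_text = F.normalize(novel_text, dim=-1)
    p_f = F.softmax(f_m @ novel_text.t() / tau_f, dim=-1)   # Eq. (11)
    p_v = F.softmax(v @ novel_text.t() / tau_v, dim=-1)
    return beta * p_f + (1 - beta) * p_v                    # Eq. (12)
```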
4 EXPERIMENTS
4.1 DATASETS
We describe the details of dataset splits for benchmarking multimodal open-vocabulary video classification and preparing flow and audio modalities.
Kinetics-700 (Carreira et al., 2019) splits. Kinetics-700 contains around 650k video clips annotated with 700 human action classes. Apart from the visual appearance, motion plays an important role for distinguishing different action classes. For dataset split, we randomly select 400 classes as base classes and the testing videos of the rest 300 classes are used for novel class evaluation.
Kinetics-700 optical flow. We follow a standard procedure (Xie et al., 2018; Han et al., 2020a;b) to use the TV-L1 algorithm (Zach et al., 2007) to extract optical flow in an unsupervised manner. To accommodate for pre-trained vision encoders, we first truncate the vertical and horizontal motion values to [−20, 20], then append a third all-zero channel. Finally we do a shift and scale transformation to map [−20, 20] to [0, 255].

VGGSound (Chen et al., 2020) splits. VGGSound contains around 200k video clips belonging to a total number of 309 classes. Different from other audiovisual datasets like AudioSet (Gemmeke et al., 2017), VGGSound ensures the source of the sound is visually present inside the same video. Thus we consider this dataset as an excellent test bed for our proposed method. We randomly select 154 base classes for training and leave the rest 155 classes for novel classes evaluation.
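A minimal NumPy sketch of the optical-flow pre-processing described above (truncation, a third channel, and rescaling to [0, 255]); whether the all-zero channel is appended before or after the rescaling is our assumption, and the TV-L1 computation itself is assumed to happen elsewhere.

```python
import numpy as np

def flow_to_image(flow_uv):
    """flow_uv: (H, W, 2) TV-L1 flow with horizontal / vertical motion."""
    clipped = np.clip(flow_uv, -20.0, 20.0)                 # truncate motion values
    scaled = (clipped + 20.0) / 40.0 * 255.0                # map [-20, 20] -> [0, 255]
    zeros = np.zeros_like(scaled[..., :1])                  # third all-zero channel
    return np.concatenate([scaled, zeros], axis=-1).astype(np.uint8)
```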
VGGSound audio spectrogram. We use the pre-processing practice of audio spectrogram transformer (AST) (Gong et al., 2021) to convert waveforms to spectrogram images. Each raw audio signal is re-sampled to 16kHz and converted to mono channel. We then calculate the log Mel spectrogram with 128 frequency bins. The processing Hamming window is 25ms with a hop length set to 10ms. For t second audio input, the generated 2D spectrogram would have the shape of 128 × 100t. We normalize the spectrogram by subtracting its mean value and dividing by its standard deviation.
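A hedged sketch of this spectrogram computation with torchaudio, using the stated parameters (16 kHz, 128 mel bins, 25 ms Hamming window, 10 ms hop); the exact AST pre-processing may differ in details such as the log offset.

```python
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, win_length=400, hop_length=160,
    n_mels=128, window_fn=torch.hamming_window)

def audio_to_spectrogram(waveform):
    """waveform: mono tensor of shape (1, num_samples), resampled to 16 kHz."""
    spec = torch.log(mel(waveform) + 1e-6)                  # log Mel, 128 bins
    return (spec - spec.mean()) / (spec.std() + 1e-6)       # per-clip normalization
```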
4.2 IMPLEMENTATION
Data augmentation and tokenization. For each video, we first randomly sample 16 frames with a stride of 4 from the whole video sequence. We then apply the standard image augmentation used on ImageNet (He et al., 2016; 2019) with the same augmentation parameters across all frames to keep temporal consistency (Qian et al., 2021). For optical flow, we follow the practice of Xie et al. (2018) and Han et al. (2020a;b) by directly treating it as images and apply the same augmentation with the video. The augmented output tensors have the shape of (16, 224, 224, 3) from both modalities which can be directly fed into CLIP’s ViT encoder. For audio, we apply specialized augmentations designed for spectrogram following Gong et al. (2021) and Nagrani et al. (2021). As the videos in VGGSound are all 10-second long, the generated spectrogram has a shape of (128, 100 × 10). We first conduct random cropping of (128, 800), sampling all frequency bands with a time duration of 8 seconds. SpecAugment (Park et al., 2019) is applied subsequently with a time masking range of 192 frames and frequency masking of 48 bins. Finally, to accommodate this single channel output with the pre-trained tokenization layer, we make two necessary changes as in Gong et al. (2021): 1) expanding the spectrogram to three duplicated channels, 2) bilinearly interpolating the original positional encoding for spectrogram images with a different spatial resolution.
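For the audio branch, the random crop and SpecAugment masking described above could be approximated with torchaudio's masking transforms as below; the mask ranges mirror the values in the text, the rest is illustrative.

```python
import torch
import torchaudio

freq_mask = torchaudio.transforms.FrequencyMasking(freq_mask_param=48)
time_mask = torchaudio.transforms.TimeMasking(time_mask_param=192)

def augment_spectrogram(spec):
    """spec: (128, 1000) log Mel spectrogram of a 10-second clip."""
    start = torch.randint(0, spec.shape[1] - 800 + 1, (1,)).item()
    crop = spec[:, start:start + 800]            # random 8-second crop, all bins
    return time_mask(freq_mask(crop.unsqueeze(0))).squeeze(0)  # SpecAugment masks
```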
Network architecture. We adopt CLIP’s ViT encoder for video, flow, and audio and the transformer encoder for text. We use 2 transformer layers for temporal fusion (L = 2) and 1 transformer layer for cross-attention, each layer has an embedding dimension of 512 and 8 attention heads. For cross-attention, query and key-value inputs use separate layer normalization.
Training hyper-parameters. We adopt the same hyper-parameters for experiments on Kinetics700 and VGGSound except for training epochs. We use a batch size of 1024 on 128 Cloud TPUv3 cores, AdamW (Loshchilov & Hutter, 2017) optimizer with a weight decay of 0.05 and an initial learning rate of 1e-4 followed by half-cosine decay (He et al., 2019). We set the weight in Eq. 9 as α = 0.5. We train 100 epochs on Kinetics-700 and 20 epochs on VGGSound since we observe an overfit using audio modality when trained longer.
Inference hyper-parameters. For video and flow, we use 4×3 views following Arnab et al. (2021) and Liu et al. (2022), where a video is uniformly sampled into 4 clips temporally, and 3 spatial crops for each clip. For audio, we use 12 temporal views without spatial cropping. The final score is averaged over 12 views. For novel classes, we set the weight β in Eq. 12 to 0.25.
4.3 MULTIMODAL OPEN-VOCABULARY VIDEO CLASSIFICATION
We evaluate MOV on Kinetics-700 to utilize modalities of video, optical flow, and text, and on VGGSound to explore the combination of video, audio and text.
Comparison baselines. We evaluate four baselines: 1) CLIP (Radford et al., 2021), which directly encodes the video and class names into embeddings with pre-trained encoders. The final prediction is given by comparing similarity scores between video and text embeddings; 2) CoOp (Zhou et al., 2021), which learns continuous text prompt embeddings instead of manually selected templates for better adaptation to downstream tasks; 3) CLIP-Adapter (Gao et al., 2021), which attaches adapter heads to both video and text encoder; 4) VATT (Akbari et al., 2021), which is a state-of-the-art multimodal video pre-training method and can do zero-shot inference for video classification. We use the same datasets, backbone and hyper-parameters as ours introduced in Sec. 4.2 to train (CLIP and VATT do not require training) and evaluate all methods.
Results. Tab. 1 shows results on Kinetics-700. Both CoOp and CLIP-Adapter achieve better performance than CLIP on base class prediction. While for novel classes, we observe a large accuracy drop compared with CLIP. The degraded performance in harmonic mean of these two methods indicates their loss of the generalization ability on novel classes outweigh their improvement on base classes.
Our method outperforms CLIP-Adapter by 8.8% on base classes, demonstrating the effectiveness of leveraging multimodal information. On novel classes, we observe an improvement of 1.4% over CLIP, indicating that bringing in flow modality improves the generalization of the video model.
We observe similar trends on VGGSound in Tab. 2. CoOp and CLIP-Adapter improve base classes but fail to generalize to novel classes, resulting in a lower harmonic mean of accuracy compared with CLIP. MOV, when fused with rich audio information, obtains a performance gain of 2.7% over CLIP on novel classes. We also conduct an additional study of generalized open-vocabulary prediction in Appendix C, where the information of whether a class is from base or novel is not known.
Backbone scaling. It is also important to analyze the scalability of MOV with stronger backbones. We experiment with the largest ViT-L/14 model released by CLIP as the vision encoder and a text encoder with embedding dimension increased to 768 and attention heads increased to 12. ViT-L/14 contains 3× more parameters than ViT-B/16 and we observe around 8% improvement on direct CLIP zero-shot evaluation on Kinetics-700 and 5% improvement on VGGSound, as indicated in the first 2 rows of Tab. 3. MOV is able to preserve the performance gain brought by using stronger CLIP models (last 2 rows of Tab. 3). Despite the significantly stronger CLIP baseline, MOV still improves 20.5% and 1.6% on Kinetics-700, and 19.3% and 2.0% on VGGSound, when comparing row 2 and row 4 of Tab. 3. The scaling performance shows that MOV has a great potential to be incorporated into recent giant vision and language models (Yuan et al., 2021; Yu et al., 2022).
4.4 CROSS-DATASET TRANSFER
Pre-training an open-vocabulary or zero-shot video classification model on large datasets like Kinetics (Carreira et al., 2019), ImageNet (Deng et al., 2009) or Sports-1M (Karpathy et al., 2014) and evaluating on UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011) is the most
Method                          Encoders†‡          Pre-train§    UCF∗ / UCF         HMDB∗ / HMDB
GA (Mishra et al., 2018)        C3D + W2V           S1M           17.3±1.1 / -       19.3±2.1 / -
TARN (Bishay et al., 2019)      C3D + W2V           S1M           19.0±2.3 / -       19.5±4.2 / -
CWEGAN (Mandal et al., 2019)    I3D + W2V           IN, K400      26.9±2.8 / -       30.2±2.7 / -
TS-GCN (Gao et al., 2019)       GLNet + W2V         IN-shuffle    34.2±3.1 / -       23.2±3.0 / -
PS-GNN (Gao et al., 2020)       GLNet + W2V         IN-shuffle    36.1±4.8 / -       25.9±4.1 / -
E2E (Brattoli et al., 2020)     R(2+1)D + W2V       K700          48.0 / 37.6        32.7 / 26.9
DASZL (Kim et al., 2021)        TSM + Attributes    IN, K400      48.9±5.8 / -       - / -
ER (Chen & Huang, 2021)         TSM + BERT          IN, K400      51.8±2.9 / -       35.3±4.6 / -
ResT (Lin et al., 2022)         RN101 + W2V         K700          58.7±3.3 / 40.6    41.1±3.7 / 34.4
MIL-NCE (Miech et al., 2020)    S3D + W2V           HT100M        - / 29.3           - / 10.4
VideoCLIP (Xu et al., 2021)     S3D + TSF           HT100M        - / 22.5           - / 11.3
VATT (Akbari et al., 2021)      ViT + TSF           HT100M        - / 18.4           - / 13.2
CLIP (Radford et al., 2021)     ViT-B/16 + TSF      WIT           79.9±3.8 / 73.0    54.0±4.1 / 46.1
ActionCLIP (Wang et al., 2021)  ViT-B/16 + TSF      WIT+          - / 69.5           - / 50.5
X-CLIP (Ni et al., 2022)        ViT-B/16 + TSF      WIT+          - / 72.0           - / 44.6
MOV (Ours)                      ViT-B/16 + TSF      WIT+          82.6±4.1 / 76.2    60.8±2.8 / 52.1
MOV (Ours)                      ViT-L/14 + TSF      WIT+          87.1±3.2 / 80.9    64.7±3.2 / 57.8
† vision encoder: C3D (Tran et al., 2015), I3D (Carreira & Zisserman, 2017), GLNet (Szegedy et al., 2015), R(2+1)D (Tran et al., 2018), TSM (Lin et al., 2019), RN101 (He et al., 2016), S3D (Xie et al., 2018), ViT (Dosovitskiy et al., 2021).
‡ text encoder: W2V (Mikolov et al., 2013), BERT (Devlin et al., 2019), TSF (Vaswani et al., 2017).
§ pre-train data: S1M (Karpathy et al., 2014), IN (Deng et al., 2009), K400 (Kay et al., 2017), IN-shuffle (Mettes et al., 2016), K700 (Carreira et al., 2019), HT100M (Radford et al., 2021), WIT (Radford et al., 2021), WIT+ has additional training on Kinetics.
common paradigm in the literature. Two settings are used for performance evaluation (Brattoli et al., 2020). The first is randomly choosing half of the classes in the test set and evaluating on the selected subset. To alleviate fluctuations caused by randomness, the evaluation is conducted independently 10 times and we report the mean accuracy with standard deviation over all trials. We denote this setting as UCF∗ and HMDB∗ in Tab. 4. The second evaluation setting is directly evaluating on the whole dataset, which is suitable for methods pre-trained purely on other datasets (Brattoli et al., 2020; Wang et al., 2021; Lin et al., 2022). We train MOV only using the 400 base classes subsampled from Kinetics-700, with video, flow and text. For evaluating on UCF and HMDB, we also use the same three modalities. The flow processing follows the same procedure described in Sec. 4.1.
We present a comprehensive comparison in Tab. 4. As in Lin et al. (2022), we list the vision and text encoder and pre-train data used. We compare with three types of state-of-the-art methods: 1) zero-shot video classification approaches (top part), 2) video and language pre-training methods (Miech et al., 2020; Xu et al., 2021; Akbari et al., 2021) (middle part), 3) CLIP adaptation methods (Wang et al., 2021; Ni et al., 2022) (bottom part). Compared to these methods, we find that utilizing pre-trained vision and language models like CLIP yields much stronger performance. MOV achieves performance gains over CLIP of around 3% on UCF101 and around 6% on HMDB51. Compared with recently proposed adaptation methods like ActionCLIP and X-CLIP, MOV performs 4.2% to 6.7% better on UCF101 and 1.6% to 7.5% better on HMDB51.
4.5 ABLATION STUDY
Multimodal fusion for base classes. As demonstrated in Fig. 1 and Fig. 2, the asymmetrical cross-attention mechanism is proposed to improve the generalization to novel classes. Here we show that cross-attention is also beneficial for base classes. Tab. 5 shows that, for Kinetics-700, simply using optical flow as input obtains 54.2% on base classes. When using score fusion, compared with the video modality, we observe identical performance on base classes. Equipped with the proposed multimodal cross-attention fusion mechanism, we obtain a 2.6% improvement on base classes. For VGGSound, the performance of audio only is quite close to video only, and score fusion improves base classes by a significant 6.5%. Our cross-attention mechanism is able to further improve upon this strong baseline by 0.7%.
Fine-tuning. We fine-tune different layers of the encoder for flow and audio modality and show results in Tab. 6. As mentioned in Sec. 3, we use the same ViT-B/16 encoder and the same initialization weight for video, flow and audio. We iterate choices of fine-tuning the last 1, 3, 6, 9, and all 12 layers and find consistent performance gains with the increasing number of trainable layers on both modalities. Thus, we adopt the setting of fine-tuning all layers for flow and audio modality.
Per-class accuracy analysis. We analyze and interpret class-wise performance differences between MOV and CLIP baseline, which only uses video and text. As illustrated in Fig. 3a, we observe strong gains on classes that require motion understanding, e.g. yawning and long jump. While we also find decreased performance on classes with subtle or ambiguous motions, e.g. look in mirror and geocaching. In Fig. 3b, we observe audio modality can significantly help disambiguate classes sharing similar visual contents, e.g. people nose blowing and people laughing. For classes being difficult in the audio domain, e.g. sloshing water and wind noise, we observe decreased performances.
5 CONCLUSION
We propose a multimodal open-vocabulary video classification method named MOV via adopting pre-trained vision and language models. Motivated by observing drastic performance differences when using video, audio, and optical flow to generalize from base to novel classes, we design a novel asymmetrical cross-modal fusion mechanism to aggregate multimodal information. Extensive experiments on Kinetics, VGGSound, UCF, and HMDB benchmarks demonstrate the effectiveness of our method and the potential of scaling to giant vision and language models.
6 REPRODUCIBILITY STATEMENT
We plan to release our code, dataset splits, and models to facilitate reproducibility. We provided details of our model, data, implementation and experiments in Sec. 3, Sec. 4 and Appendix B. The CLIP model (Radford et al., 2021) and all datasets used in this work (Carreira et al., 2019; Chen et al., 2020; Soomro et al., 2012; Kuehne et al., 2011) are publicly available.
7 ETHICS STATEMENT
The proposed method shows better classification performance on multimodal videos with novel classes on Kinetics, VGGSound, UCF, and HMDB datasets, indicating its potential for real world applications. Our method is built upon vision and language models pre-trained on large-scale data from the internet, which may contain deficiencies and biases. Our models are used only for the purpose of evaluating research ideas. More rigorous studies for bias, fairness, etc., are required before using our models for any other purposes.
A COMPARISON WITH MODALITY-SPECIFIC PRE-TRAINED NETWORKS
As we have mentioned in the introduction, instead of using modality-specific pre-trained encoder networks or methods (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrogram. Here we list the experimental results using the audio of VGGSound in Tab. 7 to show the effectiveness of our design choice. All methods only use the audio training data and are evaluated on audio. Our MOV based on CLIP’s vision encoder shows competitive performance compared to other audio-specific encoders.
B TEMPERATURE TUNING
As described in Sec. 3.4, in addition to fused flow and audio features $\{f_m, a_m\}$, we also incorporate the video feature $v$ extracted from the frozen video backbone to enhance the performance of generalization to novel classes. We denote the probability distribution followed by $\{P_f(j)\}_{j=1}^{q}$, $\{P_a(j)\}_{j=1}^{q}$ and $\{P_v(j)\}_{j=1}^{q}$ as $D_f$, $D_a$ and $D_v$. In our experiments we find the curve of $D_v$ tends to be much flatter (or have higher information entropy) than $D_f$ and $D_a$ when the temperatures $\tau_v$, $\tau_f$ and $\tau_a$ are all set to the CLIP’s default value of 0.01. Neglecting this difference and combining the scores as in Eq. 12 would lead to poor performance. We address this problem by lowering $\tau_v$ so that the distribution of $D_v$ would be more similar to $D_f$ and $D_a$ (or having similar information entropy). As shown in Tab. 8, adjusting $\tau_v$ to 0.003 while keeping $\tau_f$ and $\tau_a$ as 0.01 greatly improves the performance by 20.1% on Kinetics-700 and 15.8% on VGGSound.
C DISCUSSION ON GENERALIZED OPEN-VOCABULARY PREDICTION
Our model adopt different inference paths for base and novel classes. The evaluation setting of dividing classes into base and novel is a very common practice in existing open-vocabulary literature (Zhou et al., 2021; 2022; Gu et al., 2022; Ghiasi et al., 2021). We follow this established open-vocabulary setting to conduct experiments and evaluate our method.
If label category information isn’t given, evaluating purely on unseen classes is the classic setting of zero-shot evaluation (Xian et al., 2018). We benchmark our method in this zero-shot setting in Sec. 4.4 Cross-Dataset Transfer. Our method achieves state-of-the-art performance on commonly used UCF and HMDB zero-shot video classification benchmarks.
Here we consider another setting of generalized open-vocabulary prediction where we train our model on base classes but the model doesn’t know whether a class is from base or not during inference. A simple solution is to treat all classes as novel (i.e., use only the “Novel Class Prediction” path illustrated in Fig. 2). We conduct such an experiment on Kinetics-700 by training MOV on 400 base classes and evaluating on all 700 classes while treating all of them as novel classes. In this scenario, we observe degraded performance for both our method MOV and the CLIP baseline. Since the number of candidate classes more than doubles (300 to 700), we consider this a reasonable result. MOV improves upon CLIP in both the original (+1.4%) and generalized (+0.7%) open-vocabulary settings for predicting novel classes.
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to zero-shot learning methods?
3. Do you have any concerns regarding the processing of optical flow data, including the addition of an all-zero channel?
4. Can you clarify how the corresponding N optical flow images are obtained from N RGB frames, and what Temp. Fusion represents in Figure 2?
5. How do the authors handle calibration issues with separate classifiers for base and novel classes, and how do they ensure comparable classification scores?
6. Why does Equation (9) only use optical flow or audio, and not combine them together? Should there be three items for RGB, optical flow, and audio in the loss?
7. Why does Equation (11) ignore audio and only use optical flow, and why not use the same setting as Equation (9)? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes to leverage pre-trained vision and language models for open-vocabulary video classification. The proposed idea is fairly straightforward. Different modalities (RGB, optical flow, audio) are converted into images, then fed into the pretrained CLIP model. Different modalities are combined with cross attention. The paper repurposes the existing video datasets for open vocabulary, and defines new settings that are customized to the problem. The paper is able to outperform a few recent methods based on its designed setting.
Strengths And Weaknesses
I summarized my questions about the paper in the following:
1, My biggest concern is that the paper uses pre-trained vision and language models to tackle the open vocabulary problem. I think this approach is fundamentally flawed. Pre-trained vision and language models (CLIP) are known to have amazing zero-shot learning capabilities, since they are trained with a huge amount of text supervision. Why bother to apply the CLIP model on the open-vocabulary problem? The open-vocabulary problem is just a special case of zero-shot learning, and the CLIP model can directly do zero-shot learning on the novel classes from the open vocabulary. In Table 1, the paper reported 51.2% on base classes and 56.7 on novel classes for Kinetics-700. However, the original CLIP paper reported 60.3% or 61.3% for zero-shot learning on Kinetics-700. The paper seems to produce worse results than zero-shot learning, despite using additional modalities, such as optical flow and audio.
2, Section 4.1 describes how to process optical flow. Why add a third all-zero channel? Just to leverage the pre-trained CLIP model? Will this introduce bias to the data?
3, In Section 3.1, how to get the corresponding N optical flow images from N RGB frames? Should the number of optical flow images be N-1?
4, What is Temp. Fusion in Figure 2? Average pooling?
5, For section 3.3 and section 3.4, do we need to calibrate the classification scores for base classes and novel classes, since we have separate classifiers for base and novel classes? How to make sure that their classification scores are comparable, when we consider the joint classification problem?
6, In Eq. (9), why do we only use optical flow or audio? Why not combine them together? Should we have three items for RGB, optical flow and audio in the loss?
7, In Eq. (11), do we ignore audio and only use optical flow? Why not using the same setting as Eq. (9)?
Clarity, Quality, Novelty And Reproducibility
See above. |
ICLR | Title
Multimodal Open-Vocabulary Video Classification via Vision and Language Models
Abstract
Utilizing vision and language models (VLMs) pre-trained on internet-scale image-text pairs is becoming a promising paradigm for open-vocabulary vision tasks. This work conducts an extensive study for multimodal open-vocabulary video classification via pre-trained VLMs by leveraging motion and audio that naturally exist in the video. We design an asymmetrical cross-modal fusion mechanism to aggregate multimodal information differently for video and optical flow / audio. Experiments on Kinetics and VGGSound show that introducing more modalities significantly improves the accuracy on seen classes, while generalizing better to unseen classes over existing approaches. Despite its simplicity, our method achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot techniques, video-text pre-training methods and recent VLM-based approaches. Code and models will be released.
1 INTRODUCTION
Building open-vocabulary models capable of predicting beyond a fixed set of training classes is of crucial importance in computer vision. Recently, vision and language models (VLMs) pre-trained on internet-scale image-text pairs, e.g., CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), demonstrate impressive transferability on a wide range of vision tasks. Utilizing strong pre-trained VLMs is becoming a promising paradigm for open-vocabulary vision tasks including object detection (Gu et al., 2022) and image segmentation (Ghiasi et al., 2021; Li et al., 2022a).
In this work, we focus on the novel challenging task of multimodal open-vocabulary video classification via pre-trained VLMs. We set up open-vocabulary video benchmarks by utilizing two existing large-scale multimodal video datasets: Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). Specifically, we constructed two sets of classes: base (seen) and novel (unseen). For base classes, we have both training and testing videos, aiming at helping the pre-trained VLMs adapt to the video domain. While for novel classes, we only have testing videos, mimicking the real-world challenge of open-vocabulary video classification. To the best of our knowledge, we are the first to study how to leverage pre-trained VLMs for multimodal open-vocabulary video classification.
We start with directly fine-tuning the vision encoder of CLIP (Radford et al., 2021) with the language encoder fixed, using the training videos from base classes. As shown in Fig. 1 (a-d), although there is a decent performance gain for base classes, the accuracy for novel classes decreases significantly. This observation corroborates the findings of Zhou et al. (2022) on adapting pre-trained VLMs.
On the other hand, despite rich multimodal contents in internet videos, signals such as audio and motion are less explored in recent open-vocabulary models. This is in stark contrast to the human perception system that heavily relies on multimodal signals (Smith & Gasser, 2005). Can we leverage multimodal information to improve open-vocabulary models?
Instead of using specially designed modality-specific encoders (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrogram.
We then conduct the same experiments by fine-tuning CLIP’s vision encoder but instead using flow or audio as the input. As shown in Fig. 1 (e-h), surprisingly, we find that fine-tuning on base classes
[Figure 1 panels (a)–(h): accuracy (%) vs. training epochs for Kinetics VIDEO, VGGSound VIDEO, Kinetics FLOW, and VGGSound AUDIO, on base and novel classes.]
Figure 1: Fine-tuning pre-trained CLIP with video, flow and audio modalities. For all three modalities, fine-tuning on labeled base classes leads to significant accuracy improvement (a, c, e, g). However, when evaluating the same model on novel classes, the video modality shows decreasing performance (b, d), while the performance for both flow and audio modality is improving (f, h).
is able to also improve the performance on novel classes. This suggests that we may use flow or audio modality to improve the base to novel generalization of video modality.
In light of these observations, we propose MOV, a simple yet effective method for Multimodal Open-Vocabulary video classification. Fig. 2 shows an overview of our method. In MOV, we design a novel asymmetrical cross-modal fusion mechanism using cross-attention to leverage complementary multimodal information differently for video and optical flow / audio. The core idea is to exploit the strong transferability in the pre-trained vision encoder, while allowing greater flexibility in finetuning flow and audio encoders. MOV is trained using multimodal inputs from base classes and is able to predict both base and novel classes during inference.
We carry out extensive experiments and ablation studies on Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). MOV shows clear improvements over CLIP (Radford et al., 2021), recent CLIP adaptation techniques (Zhou et al., 2021; Gao et al., 2021), as well as videotext pre-training methods (Akbari et al., 2021) on both base and novel classes. MOV also achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot methods, state-of-the-art VLM adaption techniques, and a variety of video-text pre-training approaches. Furthermore, MOV is scalable with much stronger backbones, indicating its potential to be incorporated with large vision and language models.
2 RELATED WORK
Vision and language models. Learning a joint embedding space from vision and language modalities has been extensively studied during the past decade. Early works usually first encode two modalities separately, using hand-crafted descriptors (Elhoseiny et al., 2013) or deep networks (Lei Ba et al., 2015) for images, and skip-gram text models for languages (Frome et al., 2013). The crossmodality alignment is then achieved by metric learning (Frome et al., 2013) or language concepts (Li et al., 2017). Recently, learning vision and language modalities jointly through contrastive learning (Hadsell et al., 2006; Oord et al., 2018) becomes a promising direction. Impressive performance has been achieved by utilizing stronger encoders for vision (Dosovitskiy et al., 2021), language (Vaswani et al., 2017) and web-scale pre-training data (Hinton et al., 2015; Radford et al., 2021). CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are two representative approaches which show strong zero-shot 1 performance on various downstream tasks. Despite this strong baseline, adapting pre-trained VLMs to specific vision domains in a more effective way remains critical and is being actively studied. Examples abound, including image classification (Zhou et al., 2021; 2022; Gao et al., 2021), object detection (Gu et al., 2022; Zhong et al., 2022; Kamath et al., 2021; Li et al., 2022b), image segmentation (Ghiasi et al., 2021; Li et al., 2022a), audio classification (Guzhov et al., 2022) and video action recognition (Wang et al., 2021; Ju et al., 2021; Ni et al., 2022). Our method extends the existing research by adapting pre-trained VLMs to multimodal video and investigating the impact of additional input modalities like flow and audio.
1We use the term “zero-shot” when we need to align with settings described in some existing works. Otherwise, we would use “open-vocabulary” which we believe is a more precise term.
Open-vocabulary video classification. Zero-shot or open-vocabulary video action recognition is a representative task in this domain. Similar to early works of vision and language learning, the video input and labeled texts are encoded with modality-specific pre-trained models such as S3D (Xie et al., 2018), R(2+1)D (Tran et al., 2018) for video, and Word2Vec (Mikolov et al., 2013) for text. Since the generated video and text embeddings are not aligned, various methods have been proposed to bridge the gap by mapping two modalities into a joint embedding space (Wang & Chen, 2017; Chen & Huang, 2021; Gao et al., 2019; Wu et al., 2016; Xu et al., 2016; Zhu et al., 2018), mapping vision modality to language space (Bishay et al., 2019; Brattoli et al., 2020; Hahn et al., 2019; Xu et al., 2017) or mapping language modality to vision space (Mandal et al., 2019; Zhang & Peng, 2018). These joint embedding mapping methods are further extended to audiovisual classification (Mercea et al., 2022; Mazumder et al., 2021; Parida et al., 2020). Our approach shows that we can significantly improve the performance of open-vocabulary video classification by leveraging strong pre-trained VLMs and other modalities like flow and audio. To our knowledge, this has not been done by prior works in this field.
Multimodal fusion for video. Videos are a natural source of multimodal data including motion and audio. Two-stream networks are used to model video and optical flow simultaneously for action recognition (Simonyan & Zisserman, 2014; Wang et al., 2016; Feichtenhofer et al., 2016; 2017). Late fusion is adopted (Simonyan & Zisserman, 2014; Wang et al., 2016) and then thoroughly studied (Feichtenhofer et al., 2016; 2017) on how to better perform spatio-temporal fusion from two modalities. In the domain of audiovisual fusion, early methods (Chen & Rao, 1998) usually adopt straightforward score fusion or stacking input data for early fusion. Later research (Kazakos et al., 2019; Xiao et al., 2020; Fayek & Kumar, 2020; Nagrani et al., 2021; Chen & Ho, 2022; Chen et al., 2021; Zhao et al., 2022) focuses on developing better mid or late fusion strategies to improve the final performance. Different from existing works focusing on a fixed set of classes, we use multimodal fusion to help open-vocabulary video models generalize better to novel classes.
3 METHOD
An overview of our proposed method is shown in Fig. 2. We next describe each component.
3.1 MODALITY-SPECIFIC ENCODING
Given a pre-trained vision and language model, e.g., CLIP (Radford et al., 2021), we denote its vision encoder as h(·|θh) and its language encoder as g(·|θg). For a multimodal video input, we sample N RGB frames V and calculate the corresponding optical flow images F , resulting in V = {v1, v2, . . . , vN} and F = {f1, f2, . . . , fN}. We also generate the spectrogram image A from the raw audio waveform. More implementation details can be found in Sec. 4. We use the same encoder architecture h(·|·) to extract feature representations for video, flow and audio modalities, denoted as hv(·|θv), hf (·|θf ), and ha(·|θa) respectively. Model parameters θv , θf and θa are all initialized with the weight θh from the pre-trained VLM’s vision encoder. Apart from being simple and easy to implement, this design has two additional advantages: 1) the performance of adopting the pre-trained VLM’s vision encoder to other modalities is competitive against in-domain methods (a detailed study in Appendix A); 2) the vision encoder is trained to align with the language encoder, potentially helping the generalization from base to novel classes. We encode each modality separately as:
$v = h_v(V \mid \theta_v), \quad f = h_f(F \mid \theta_f), \quad a = h_a(A \mid \theta_a), \qquad (1)$
where v and f are features from N frames, and a is the representation of a single spectrogram image.
To better aggregate the temporal features of video and flow modalities, we attach temporal fusion networks φv(·) and φf (·), consisting of L transformer layers each, on top of hv(·|θv) and hf (·|θf ). We denote the input of the l-th transformer layer as zl and the input z0 can be either v or f . Then the forward pass of the l-th layer in φv(·) and φf (·) can be formulated as:
$y^{l} = \mathrm{MSA}(\mathrm{LN}(z^{l})) + z^{l}, \qquad (2)$
$z^{l+1} = \mathrm{MLP}(\mathrm{LN}(y^{l})) + y^{l}, \qquad (3)$
where LN stands for layer normalization, MSA represents multi-head self-attention, and MLP means multi-layer perceptron. For audio feature a, we simply attach an MLP module upon the backbone. We obtain the temporally fused features as:
$v_t = \phi_v(v), \quad f_t = \phi_f(f), \quad a_t = \mathrm{MLP}(a). \qquad (4)$
Finally, for the text modality, suppose we have p base classes with labels. We fill each of the class names into 28 video classification prompts provided by CLIP (Radford et al., 2021) like “a video of a person doing {class name}” and then encode the sentence using the pre-trained language encoder $g(\cdot \mid \theta_g)$ from VLM. The embedding of each class is averaged over all templates denoted as $\{B_i\}_{i=1}^{p}$.
3.2 ASYMMETRICAL MULTIMODAL FUSION
We adopt an asymmetrical cross-attention mechanism to fuse mutlimodal features. Without loss of generality, as shown in Fig. 2, our method described here is for fusing one of {flow, audio}modality with video modality. The algorithm can be easily extended to fusing video with more modalities.
For the video modality, we extract the information from other modalities to enhance the performance of video feature. Thus we use vt as the input for attention query, and ft or at from the other modality as the input for attention key and value. The fused multimodal video feature vm can be written as:
$v_t = \mathrm{MCA}(\mathrm{LN}(v_t), \mathrm{LN}(x_t)) + v_t, \quad x_t \in \{f_t, a_t\}, \qquad (5)$
$v_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(v_t)) + v_t\big), \qquad (6)$
where MCA denotes multi-head cross-attention, AvgPool denotes temporal average pooling.
For the audio and flow modalities, we adopt an asymmetrical design aiming at incorporating the information from the video modality to enhance the generalization ability of the feature to novel classes. Since the video temporal fusion network φv(·) for generating the video feature vt is trained on base classes, vt loses the generalization ability to novel classes (shown in Fig. 1). Therefore we choose to directly use the frozen vision encoder’s output v instead of vt for better generalization to novel classes. We obtain the fused multimodal flow and audio feature fm and am as:
$f_t = \mathrm{MCA}(\mathrm{LN}(f_t), \mathrm{LN}(v)) + f_t, \qquad a_t = \mathrm{MCA}(\mathrm{LN}(a_t), \mathrm{LN}(v)) + a_t, \qquad (7)$
$f_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(f_t)) + f_t\big), \qquad a_m = \mathrm{AvgPool}\big(\mathrm{MLP}(\mathrm{LN}(a_t)) + a_t\big). \qquad (8)$
3.3 TRAINING AND INFERENCE ON BASE CLASSES
During training, each input multimodal video has a corresponding label y belonging to the base classes. We would optimize different modalities simultaneously via maximizing the video-text, flow-text and audio-text similarity. The training loss function can be formulated as:
$\mathcal{L} = \alpha \left( -\log \frac{\exp(\mathrm{sim}(v_m, B_y)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(v_m, B_i)/\tau)} \right) + (1 - \alpha) \left( -\log \frac{\exp(\mathrm{sim}(x_m, B_y)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(x_m, B_i)/\tau)} \right), \qquad (9)$
where xm ∈ {fm, am} is the final fused flow or audio feature, α is the weight balancing the two loss terms, sim(·, ·) is the cosine similarity, and τ is a pre-defined temperature parameter. During training, we freeze the video encoder and the text encoder to retain their strong generalization to novel classes and to save computation, while for the other two modalities, flow and audio, we fine-tune the encoders end-to-end. An ablation study on fine-tuning different numbers of layers can be found in Tab. 6.
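A sketch of the training objective in Eq. 9 is shown below (hypothetical tensor names; B holds the base-class text embeddings and all features are assumed L2-normalized, so cosine similarity reduces to a dot product).

    import torch
    import torch.nn.functional as F

    def mov_loss(v_m, x_m, B, y, alpha=0.5, tau=0.01):
        # v_m: (batch, d) fused video features; x_m: (batch, d) fused flow or audio features
        # B:   (p, d) base-class text embeddings; y: (batch,) integer labels
        logits_v = v_m @ B.t() / tau  # cosine similarity divided by temperature
        logits_x = x_m @ B.t() / tau
        return alpha * F.cross_entropy(logits_v, y) + (1 - alpha) * F.cross_entropy(logits_x, y)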
For inference on base classes, we compute the probability belonging to the j-th class by:
P(j) = \frac{\exp(\mathrm{sim}(v_m, B_j)/\tau)}{\sum_{i=1}^{p} \exp(\mathrm{sim}(v_m, B_i)/\tau)}, \quad j \in \{1, 2, \ldots, p\}. \quad (10)
3.4 GENERALIZATION TO NOVEL CLASSES
Similar to base classes, we obtain the text embeddings for novel classes as \{N_i\}_{i=1}^{q}, where q is the number of novel classes. In addition to the fused features f_m or a_m, we also incorporate the video feature v extracted from the frozen video encoder, followed by temporal average pooling. Similar to Eq. 10, we compute the probability predictions as (here we only show the flow modality for simplicity):
P_f(j) = \frac{\exp(\mathrm{sim}(f_m, N_j)/\tau_f)}{\sum_{i=1}^{q} \exp(\mathrm{sim}(f_m, N_i)/\tau_f)}, \quad P_v(j) = \frac{\exp(\mathrm{sim}(v, N_j)/\tau_v)}{\sum_{i=1}^{q} \exp(\mathrm{sim}(v, N_i)/\tau_v)}, \quad j \in \{1, 2, \ldots, q\}. \quad (11)
We denote the probability distributions given by \{P_f(j)\}_{j=1}^{q} and \{P_v(j)\}_{j=1}^{q} as D_f and D_v. In our experiments we find the curve of D_v tends to be much flatter (i.e., have higher information entropy) than D_f when the temperatures τ_v and τ_f are both set to CLIP’s default value of 0.01, resulting in poor performance. We find that simply lowering τ_v to 0.003 while keeping τ_f and τ_a at 0.01 solves this issue. A detailed ablation study on the temperature can be found in Appendix B.
The final probability predictions for novel classes are calculated by a weighted sum:
P (j) = βPf (j) + (1− β)Pv(j). (12)
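The novel-class ensemble of Eqs. 11-12 can be sketched as follows (hypothetical names; features and the novel-class text embeddings N are assumed L2-normalized, with the adjusted temperatures discussed in Appendix B).

    import torch

    def novel_class_probs(x_m, v, N, beta=0.25, tau_x=0.01, tau_v=0.003):
        # x_m: (d,) fused flow or audio feature; v: (d,) frozen, temporally averaged video feature
        # N:   (q, d) novel-class text embeddings
        p_x = torch.softmax(N @ x_m / tau_x, dim=0)  # Eq. 11, fused-modality branch
        p_v = torch.softmax(N @ v / tau_v, dim=0)    # Eq. 11, frozen-video branch
        return beta * p_x + (1 - beta) * p_v         # Eq. 12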
4 EXPERIMENTS
4.1 DATASETS
We describe the details of dataset splits for benchmarking multimodal open-vocabulary video classification and preparing flow and audio modalities.
Kinetics-700 (Carreira et al., 2019) splits. Kinetics-700 contains around 650k video clips annotated with 700 human action classes. Apart from the visual appearance, motion plays an important role for distinguishing different action classes. For dataset split, we randomly select 400 classes as base classes and the testing videos of the rest 300 classes are used for novel class evaluation.
Kinetics-700 optical flow. We follow a standard procedure (Xie et al., 2018; Han et al., 2020a;b) and use the TV-L1 algorithm (Zach et al., 2007) to extract optical flow in an unsupervised manner. To accommodate the pre-trained vision encoders, we first truncate the vertical and horizontal motion values to [−20, 20], then append a third all-zero channel. Finally, we apply a shift-and-scale transformation to map [−20, 20] to [0, 255] (a sketch of this conversion is given after the dataset descriptions below).
VGGSound (Chen et al., 2020) splits. VGGSound contains around 200k video clips belonging to a total of 309 classes. Different from other audiovisual datasets such as AudioSet (Gemmeke et al., 2017), VGGSound ensures that the source of the sound is visually present inside the same video. We therefore consider this dataset an excellent test bed for our proposed method. We randomly select 154 base classes for training and leave the remaining 155 classes for novel class evaluation.
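The flow-to-image conversion mentioned above can be sketched as follows (hypothetical function name; flow is the two-channel TV-L1 output).

    import numpy as np

    def flow_to_image(flow, bound=20.0):
        # flow: (H, W, 2) horizontal / vertical TV-L1 motion values.
        clipped = np.clip(flow, -bound, bound)                     # truncate to [-20, 20]
        zeros = np.zeros_like(clipped[..., :1])
        three_channel = np.concatenate([clipped, zeros], axis=-1)  # append an all-zero channel
        return ((three_channel + bound) / (2 * bound) * 255.0).astype(np.uint8)  # map to [0, 255]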
VGGSound audio spectrogram. We use the pre-processing practice of the audio spectrogram transformer (AST) (Gong et al., 2021) to convert waveforms to spectrogram images. Each raw audio signal is re-sampled to 16 kHz and converted to a mono channel. We then calculate the log Mel spectrogram with 128 frequency bins, using a 25 ms Hamming window with a hop length of 10 ms. For a t-second audio input, the generated 2D spectrogram has the shape 128 × 100t. We normalize the spectrogram by subtracting its mean value and dividing by its standard deviation.
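With torchaudio, the spectrogram preparation could look roughly like the sketch below; this is an approximation under the stated settings (16 kHz, 128 Mel bins, 25 ms Hamming window, 10 ms hop), and the exact AST feature extraction may differ.

    import torch
    import torchaudio

    def waveform_to_logmel(waveform, sample_rate):
        # waveform: (channels, samples) raw audio.
        waveform = waveform.mean(dim=0, keepdim=True)  # convert to mono
        if sample_rate != 16000:
            waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
        mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=16000, n_fft=400, win_length=400, hop_length=160,  # 25 ms window, 10 ms hop
            n_mels=128, window_fn=torch.hamming_window)(waveform)
        logmel = torch.log(mel + 1e-6).squeeze(0)                          # (128, ~100 * seconds)
        return (logmel - logmel.mean()) / (logmel.std() + 1e-6)            # normalize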
4.2 IMPLEMENTATION
Data augmentation and tokenization. For each video, we first randomly sample 16 frames with a stride of 4 from the whole video sequence. We then apply the standard image augmentation used on ImageNet (He et al., 2016; 2019) with the same augmentation parameters across all frames to keep temporal consistency (Qian et al., 2021). For optical flow, we follow the practice of Xie et al. (2018) and Han et al. (2020a;b) by directly treating it as images and applying the same augmentation as the video. The augmented output tensors from both modalities have the shape (16, 224, 224, 3) and can be directly fed into CLIP’s ViT encoder. For audio, we apply specialized augmentations designed for spectrograms following Gong et al. (2021) and Nagrani et al. (2021). As the videos in VGGSound are all 10 seconds long, the generated spectrogram has a shape of (128, 100 × 10). We first conduct random cropping of (128, 800), sampling all frequency bands over a time duration of 8 seconds. SpecAugment (Park et al., 2019) is applied subsequently with a time masking range of 192 frames and frequency masking of 48 bins. Finally, to accommodate this single-channel output with the pre-trained tokenization layer, we make two necessary changes as in Gong et al. (2021): 1) expanding the spectrogram to three duplicated channels, 2) bilinearly interpolating the original positional encoding for spectrogram images with a different spatial resolution.
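The two tokenization changes for the single-channel spectrogram can be sketched as follows (hypothetical shapes and names; pos_embed excludes the class token for simplicity).

    import torch
    import torch.nn.functional as F

    def adapt_spectrogram_input(spec, pos_embed, old_grid, new_grid):
        # spec: (128, 800) log-Mel image -> duplicated to 3 channels for the RGB tokenizer.
        spec3 = spec.unsqueeze(0).repeat(3, 1, 1)
        # pos_embed: (1, old_h * old_w, d) patch positional encodings from the image model.
        d = pos_embed.shape[-1]
        grid = pos_embed.reshape(1, old_grid[0], old_grid[1], d).permute(0, 3, 1, 2)
        grid = F.interpolate(grid, size=new_grid, mode='bilinear', align_corners=False)
        new_pos = grid.permute(0, 2, 3, 1).reshape(1, new_grid[0] * new_grid[1], d)
        return spec3, new_pos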
Network architecture. We adopt CLIP’s ViT encoder for video, flow, and audio and the transformer encoder for text. We use 2 transformer layers for temporal fusion (L = 2) and 1 transformer layer for cross-attention, each layer has an embedding dimension of 512 and 8 attention heads. For cross-attention, query and key-value inputs use separate layer normalization.
Training hyper-parameters. We adopt the same hyper-parameters for experiments on Kinetics-700 and VGGSound except for training epochs. We use a batch size of 1024 on 128 Cloud TPUv3 cores, the AdamW (Loshchilov & Hutter, 2017) optimizer with a weight decay of 0.05 and an initial learning rate of 1e-4 followed by half-cosine decay (He et al., 2019). We set the weight in Eq. 9 as α = 0.5. We train 100 epochs on Kinetics-700 and 20 epochs on VGGSound since we observe overfitting with the audio modality when training longer.
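A minimal sketch of this optimizer and schedule setup (hypothetical helper; model and total_training_steps are placeholders for the trainable modules and the schedule length, and half-cosine decay is approximated with cosine annealing):

    import torch

    def build_optimizer(model, total_training_steps):
        # AdamW with weight decay 0.05, initial LR 1e-4, and a cosine decay schedule.
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_training_steps)
        return optimizer, scheduler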
Inference hyper-parameters. For video and flow, we use 4×3 views following Arnab et al. (2021) and Liu et al. (2022), where a video is uniformly sampled into 4 clips temporally, and 3 spatial crops for each clip. For audio, we use 12 temporal views without spatial cropping. The final score is averaged over 12 views. For novel classes, we set the weight β in Eq. 12 to 0.25.
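Multi-view inference then reduces to averaging per-view scores, e.g. (a sketch with hypothetical names):

    import torch

    def multi_view_predict(score_fn, views):
        # views: list of pre-processed clips / crops (4 temporal x 3 spatial = 12 for video and flow,
        # 12 temporal crops for audio); score_fn returns class probabilities for one view.
        scores = torch.stack([score_fn(v) for v in views])  # (num_views, num_classes)
        return scores.mean(dim=0)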
4.3 MULTIMODAL OPEN-VOCABULARY VIDEO CLASSIFICATION
We evaluate MOV on Kinetics-700 to utilize modalities of video, optical flow, and text, and on VGGSound to explore the combination of video, audio and text.
Comparison baselines. We evaluate four baselines: 1) CLIP (Radford et al., 2021), which directly encodes the video and class names into embeddings with pre-trained encoders; the final prediction is given by comparing similarity scores between video and text embeddings; 2) CoOp (Zhou et al., 2021), which learns continuous text prompt embeddings instead of manually selected templates for better adaptation to downstream tasks; 3) CLIP-Adapter (Gao et al., 2021), which attaches adapter heads to both the video and text encoders; 4) VATT (Akbari et al., 2021), which is a state-of-the-art multimodal video pre-training method that can perform zero-shot inference for video classification. We use the same datasets, backbone and hyper-parameters as ours (introduced in Sec. 4.2) to train and evaluate all methods (CLIP and VATT do not require training).
Results. Tab. 1 shows results on Kinetics-700. Both CoOp and CLIP-Adapter achieve better performance than CLIP on base class prediction. For novel classes, however, we observe a large accuracy drop compared with CLIP. The degraded harmonic mean of these two methods indicates that their loss of generalization ability on novel classes outweighs their improvement on base classes.
Our method outperforms CLIP-Adapter by 8.8% on base classes, demonstrating the effectiveness of leveraging multimodal information. On novel classes, we observe an improvement of 1.4% over CLIP, indicating that bringing in flow modality improves the generalization of the video model.
We observe similar trends on VGGSound in Tab. 2. CoOp and CLIP-Adapter improve base classes but fail to generalize to novel classes, resulting in a lower harmonic mean of accuracy compared with CLIP. MOV, when fused with rich audio information, obtains a performance gain of 2.7% over CLIP on novel classes. We also conduct an additional study of generalized open-vocabulary prediction in Appendix C, where the information of whether a class is from base or novel is not known.
Backbone scaling. It is also important to analyze the scalability of MOV with stronger backbones. We experiment with the largest ViT-L/14 model released by CLIP as the vision encoder and a text encoder with embedding dimension increased to 768 and attention heads increased to 12. ViT-L/14 contains 3× more parameters than ViT-B/16 and we observe around 8% improvement on direct CLIP zero-shot evaluation on Kinetics-700 and 5% improvement on VGGSound, as indicated in the first 2 rows of Tab. 3. MOV is able to preserve the performance gain brought by using stronger CLIP models (last 2 rows of Tab. 3). Despite the significantly stronger CLIP baseline, MOV still improves 20.5% and 1.6% on Kinetics-700, and 19.3% and 2.0% on VGGSound, when comparing row 2 and row 4 of Tab. 3. The scaling performance shows that MOV has a great potential to be incorporated into recent giant vision and language models (Yuan et al., 2021; Yu et al., 2022).
4.4 CROSS-DATASET TRANSFER
Pre-training an open-vocabulary or zero-shot video classification model on large datasets like Kinetics (Carreira et al., 2019), ImageNet (Deng et al., 2009) or Sports-1M (Karpathy et al., 2014) and evaluating on UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011) is the most
Table 4 rows (Method | vision + text encoder | pre-train data | UCF∗ / UCF | HMDB∗ / HMDB):
GA (Mishra et al., 2018) | C3D + W2V | S1M | 17.3±1.1 / - | 19.3±2.1 / -
TARN (Bishay et al., 2019) | C3D + W2V | S1M | 19.0±2.3 / - | 19.5±4.2 / -
CWEGAN (Mandal et al., 2019) | I3D + W2V | IN, K400 | 26.9±2.8 / - | 30.2±2.7 / -
TS-GCN (Gao et al., 2019) | GLNet + W2V | IN-shuffle | 34.2±3.1 / - | 23.2±3.0 / -
PS-GNN (Gao et al., 2020) | GLNet + W2V | IN-shuffle | 36.1±4.8 / - | 25.9±4.1 / -
E2E (Brattoli et al., 2020) | R(2+1)D + W2V | K700 | 48.0 / 37.6 | 32.7 / 26.9
DASZL (Kim et al., 2021) | TSM + Attributes | IN, K400 | 48.9±5.8 / - | - / -
ER (Chen & Huang, 2021) | TSM + BERT | IN, K400 | 51.8±2.9 / - | 35.3±4.6 / -
ResT (Lin et al., 2022) | RN101 + W2V | K700 | 58.7±3.3 / 40.6 | 41.1±3.7 / 34.4
MIL-NCE (Miech et al., 2020) | S3D + W2V | HT100M | - / 29.3 | - / 10.4
VideoCLIP (Xu et al., 2021) | S3D + TSF | HT100M | - / 22.5 | - / 11.3
VATT (Akbari et al., 2021) | ViT + TSF | HT100M | - / 18.4 | - / 13.2
CLIP (Radford et al., 2021) | ViT-B/16 + TSF | WIT | 79.9±3.8 / 73.0 | 54.0±4.1 / 46.1
ActionCLIP (Wang et al., 2021) | ViT-B/16 + TSF | WIT+ | - / 69.5 | - / 50.5
X-CLIP (Ni et al., 2022) | ViT-B/16 + TSF | WIT+ | - / 72.0 | - / 44.6
MOV (Ours) | ViT-B/16 + TSF | WIT+ | 82.6±4.1 / 76.2 | 60.8±2.8 / 52.1
MOV (Ours) | ViT-L/14 + TSF | WIT+ | 87.1±3.2 / 80.9 | 64.7±3.2 / 57.8
† vision encoder: C3D (Tran et al., 2015), I3D (Carreira & Zisserman, 2017), GLNet (Szegedy et al., 2015), R(2+1)D (Tran et al., 2018), TSM (Lin et al., 2019), RN101 (He et al., 2016), S3D (Xie et al., 2018), ViT (Dosovitskiy et al., 2021).
‡ text encoder: W2V (Mikolov et al., 2013), BERT (Devlin et al., 2019), TSF (Vaswani et al., 2017).
§ pre-train data: S1M (Karpathy et al., 2014), IN (Deng et al., 2009), K400 (Kay et al., 2017), IN-shuffle (Mettes et al., 2016), K700 (Carreira et al., 2019), HT100M (Radford et al., 2021), WIT (Radford et al., 2021), WIT+ has additional training on Kinetics.
common paradigm in the literature. Two settings are used for performance evaluation (Brattoli et al., 2020). The first randomly chooses half of the classes in the test set and evaluates on the selected subset. To alleviate fluctuations caused by randomness, the evaluation is conducted independently 10 times and we report the mean accuracy with standard deviation over all trials. We denote this setting as UCF∗ and HMDB∗ in Tab. 4. The second evaluation setting is directly evaluating on the whole dataset, which is suitable for methods pre-trained purely on other datasets (Brattoli et al., 2020; Wang et al., 2021; Lin et al., 2022). We train MOV using only the 400 base classes subsampled from Kinetics-700, with video, flow and text. For evaluating on UCF and HMDB, we also use the same three modalities. The flow processing follows the same procedure described in Sec. 4.1.
We present a comprehensive comparison in Tab. 4. As in Lin et al. (2022), we list the vision and text encoder and the pre-train data used. We compare with three types of state-of-the-art methods: 1) zero-shot video classification approaches (top part), 2) video and language pre-training methods (Miech et al., 2020; Xu et al., 2021; Akbari et al., 2021) (middle part), 3) CLIP adaptation methods (Wang et al., 2021; Ni et al., 2022) (bottom part). Compared to these methods, we find that utilizing pre-trained vision and language models like CLIP yields much stronger performance. MOV achieves performance gains over CLIP of around 3% on UCF101 and around 6% on HMDB51. Compared with recently proposed adaptation methods like ActionCLIP and X-CLIP, MOV performs 4.2% to 6.7% better on UCF101 and 1.6% to 7.5% better on HMDB51.
4.5 ABLATION STUDY
Multimodal fusion for base classes. As demonstrated in Fig. 1 and Fig. 2, the asymmetrical cross-attention mechanism is proposed to improve the generalization to novel classes. Here we show that cross-attention is also beneficial for base classes. Tab. 5 shows that, for Kinetics-700, simply using optical flow as input obtains 54.2% on base classes. With score fusion, we observe performance on base classes identical to the video modality. Equipped with the proposed multimodal cross-attention fusion mechanism, we obtain a 2.6% improvement on base classes. For VGGSound, the performance of audio only is quite close to video only, and score fusion helps base classes with a significant 6.5% improvement. Our cross-attention mechanism is able to further improve upon this strong baseline by 0.7%.
Fine-tuning. We fine-tune different layers of the encoder for flow and audio modality and show results in Tab. 6. As mentioned in Sec. 3, we use the same ViT-B/16 encoder and the same initialization weight for video, flow and audio. We iterate choices of fine-tuning the last 1, 3, 6, 9, and all 12 layers and find consistent performance gains with the increasing number of trainable layers on both modalities. Thus, we adopt the setting of fine-tuning all layers for flow and audio modality.
Per-class accuracy analysis. We analyze and interpret class-wise performance differences between MOV and the CLIP baseline, which only uses video and text. As illustrated in Fig. 3a, we observe strong gains on classes that require motion understanding, e.g. yawning and long jump. We also find decreased performance on classes with subtle or ambiguous motions, e.g. look in mirror and geocaching. In Fig. 3b, we observe that the audio modality can significantly help disambiguate classes sharing similar visual content, e.g. people nose blowing and people laughing. For classes that are difficult in the audio domain, e.g. sloshing water and wind noise, we observe decreased performance.
5 CONCLUSION
We propose a multimodal open-vocabulary video classification method named MOV via adopting pre-trained vision and language models. Motivated by observing drastic performance differences when using video, audio, and optical flow to generalize from base to novel classes, we design a novel asymmetrical cross-modal fusion mechanism to aggregate multimodal information. Extensive experiments on Kinetics, VGGSound, UCF, and HMDB benchmarks demonstrate the effectiveness of our method and the potential of scaling to giant vision and language models.
6 REPRODUCIBILITY STATEMENT
We plan to release our code, dataset splits, and models to facilitate reproducibility. We provided details of our model, data, implementation and experiments in Sec. 3, Sec. 4 and Appendix B. The CLIP model (Radford et al., 2021) and all datasets used in this work (Carreira et al., 2019; Chen et al., 2020; Soomro et al., 2012; Kuehne et al., 2011) are publicly available.
7 ETHICS STATEMENT
The proposed method shows better classification performance on multimodal videos with novel classes on Kinetics, VGGSound, UCF, and HMDB datasets, indicating its potential for real world applications. Our method is built upon vision and language models pre-trained on large-scale data from the internet, which may contain deficiencies and biases. Our models are used only for the purpose of evaluating research ideas. More rigorous studies for bias, fairness, etc., are required before using our models for any other purposes.
A COMPARISON WITH MODALITY-SPECIFIC PRE-TRAINED NETWORKS
As we have mentioned in the introduction, instead of using modality-specific pre-trained encoder networks or methods (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrograms. Here we list experimental results using the audio of VGGSound in Tab. 7 to show the effectiveness of our design choice. All methods only use the audio training data and evaluate on audio. Our MOV based on CLIP’s vision encoder shows competitive performance compared to other audio-specific encoders.
B TEMPERATURE TUNING
As described in Sec. 3.4, in addition to the fused flow and audio features {fm, am}, we also incorporate the video feature v extracted from the frozen video backbone to enhance generalization to novel classes. We denote the probability distributions given by \{P_f(j)\}_{j=1}^{q}, \{P_a(j)\}_{j=1}^{q} and \{P_v(j)\}_{j=1}^{q} as D_f, D_a and D_v. In our experiments we find the curve of D_v tends to be much flatter (i.e., have higher information entropy) than D_f and D_a when the temperatures τ_v, τ_f and τ_a are all set to CLIP’s default value of 0.01. Neglecting this difference and combining the scores as in Eq. 12 would lead to poor performance. We address this problem by lowering τ_v so that the distribution D_v becomes more similar to D_f and D_a (i.e., has similar information entropy). As shown in Tab. 8, adjusting τ_v to 0.003 while keeping τ_f and τ_a at 0.01 greatly improves the performance, by 20.1% on Kinetics-700 and 15.8% on VGGSound.
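The effect of lowering τ_v can be seen directly from the softmax entropy; a small illustrative snippet (hypothetical, randomly generated similarities standing in for the narrowly spread cosine scores):

    import torch

    def softmax_entropy(similarities, tau):
        p = torch.softmax(similarities / tau, dim=-1)
        return -(p * torch.log(p + 1e-12)).sum()

    sims = torch.randn(300) * 0.05 + 0.2       # cosine similarities are typically narrowly spread
    print(softmax_entropy(sims, tau=0.01))     # flatter distribution, higher entropy
    print(softmax_entropy(sims, tau=0.003))    # sharper distribution, lower entropy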
C DISCUSSION ON GENERALIZED OPEN-VOCABULARY PREDICTION
Our model adopts different inference paths for base and novel classes. The evaluation setting of dividing classes into base and novel is a very common practice in the existing open-vocabulary literature (Zhou et al., 2021; 2022; Gu et al., 2022; Ghiasi et al., 2021). We follow this established open-vocabulary setting to conduct experiments and evaluate our method.
If label category information isn’t given, evaluating purely on unseen classes is the classic setting of zero-shot evaluation (Xian et al., 2018). We benchmark our method in this zero-shot setting in Sec. 4.4 Cross-Dataset Transfer. Our method achieves state-of-the-art performance on commonly used UCF and HMDB zero-shot video classification benchmarks.
Here we consider another setting of generalized open-vocabulary prediction, where we train our model on base classes but the model does not know whether a class is from base or not during inference. A simple solution is to treat all classes as novel (i.e., use only the “Novel Class Prediction” path illustrated in Fig. 2). We conduct such an experiment on Kinetics-700 by training MOV on 400 base classes and evaluating on all 700 classes, treating all of them as novel classes. In this scenario, we observe degraded performance for both our method MOV and the CLIP baseline. Since the number of classes is more than 2× larger (300 to 700), we consider this a reasonable result. MOV improves upon CLIP in both the original (+1.4%) and generalized (+0.7%) open-vocabulary settings for predicting novel classes. | 1. What is the focus of the paper in terms of multi-modal video classification?
2. What are the strengths of the proposed approach regarding clarity and reproducibility?
3. What are the weaknesses of the paper regarding novelty and sensitivity to hyperparameter tuning?
4. Do you have any concerns regarding the performance of the model and its dependence on carefully selected parameters?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a new model for multi-modal open-vocabulary video classification by employing information from optical flow and spectrogram images. Experiments were conducted on the K700 and VGGSound datasets. Improvements over baselines were achieved.
Strengths And Weaknesses
Strength
The models are presented clearly. Details are also included. It should be easy to reproduce the results.
Weaknesses
The proposed network is a combination of several existing techniques, and its goal is simply to fuse information from all modalities. Thus, it seems to the reviewer that the novelty is limited.
From Table 8, it seems the authors tuned hyper-parameters on the test set, and the performance is very sensitive to the temperature. Without carefully selected parameters, the performance of the model is similar to CLIP. It is possible that the improvements come from carefully tuned parameters, not from the proposed model. The reviewer would like to see a response from the authors about this concern.
The performance with only the text modality should be provided. Since the dataset is new, it is important to know the performance of basic models.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow. The presentations are clear and it should be easy to reproduce the results. |
ICLR | Title
Multimodal Open-Vocabulary Video Classification via Vision and Language Models
Abstract
Utilizing vision and language models (VLMs) pre-trained on internet-scale image-text pairs is becoming a promising paradigm for open-vocabulary vision tasks. This work conducts an extensive study for multimodal open-vocabulary video classification via pre-trained VLMs by leveraging motion and audio that naturally exist in the video. We design an asymmetrical cross-modal fusion mechanism to aggregate multimodal information differently for video and optical flow / audio. Experiments on Kinetics and VGGSound show that introducing more modalities significantly improves the accuracy on seen classes, while generalizing better to unseen classes over existing approaches. Despite its simplicity, our method achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot techniques, video-text pre-training methods and recent VLM-based approaches. Code and models will be released.
1 INTRODUCTION
Building open-vocabulary models capable of predicting beyond a fixed set of training classes is of crucial importance in computer vision. Recently, vision and language models (VLMs) pre-trained on internet-scale image-text pairs, e.g., CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), demonstrate impressive transferability on a wide range of vision tasks. Utilizing strong pre-trained VLMs is becoming a promising paradigm for open-vocabulary vision tasks including object detection (Gu et al., 2022) and image segmentation (Ghiasi et al., 2021; Li et al., 2022a).
In this work, we focus on the novel challenging task of multimodal open-vocabulary video classification via pre-trained VLMs. We set up open-vocabulary video benchmarks by utilizing two existing large-scale multimodal video datasets: Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). Specifically, we constructed two sets of classes: base (seen) and novel (unseen). For base classes, we have both training and testing videos, aiming at helping the pre-trained VLMs adapt to the video domain. While for novel classes, we only have testing videos, mimicking the real-world challenge of open-vocabulary video classification. To the best of our knowledge, we are the first to study how to leverage pre-trained VLMs for multimodal open-vocabulary video classification.
We start with directly fine-tuning the vision encoder of CLIP (Radford et al., 2021) with the language encoder fixed, using the training videos from base classes. As shown in Fig. 1 (a-d), although there is a decent performance gain for base classes, the accuracy for novel classes decreases significantly. This observation corroborates with Zhou et al. (2022) on adapting pre-trained VLMs.
On the other hand, despite rich multimodal contents in internet videos, signals such as audio and motion are less explored in recent open-vocabulary models. This is in stark contrast to the human perception system that heavily relies on multimodal signals (Smith & Gasser, 2005). Can we leverage multimodal information to improve open-vocabulary models?
Instead of using specially designed modality-specific encoders (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrogram.
We then conduct the same experiments by fine-tuning CLIP’s vision encoder but instead using flow or audio as the input. As shown in Fig. 1 (e-h), surprisingly, we find that fine-tuning on base classes
[Figure 1 here: eight line plots of accuracy (%) versus training epochs; panels (a, b) and (c, d) show the video modality on Kinetics and VGGSound base/novel classes, and panels (e-h) show the flow (Kinetics) and audio (VGGSound) modalities on base/novel classes.]
Figure 1: Fine-tuning pre-trained CLIP with video, flow and audio modalities. For all three modalities, fine-tuning on labeled base classes leads to significant accuracy improvement (a, c, e, g). However, when evaluating the same model on novel classes, the video modality shows decreasing performance (b, d), while the performance for both flow and audio modality is improving (f, h).
is able to also improve the performance on novel classes. This suggests that we may use the flow or audio modality to improve the base-to-novel generalization of the video modality.
In light of these observations, we propose MOV, a simple yet effective method for Multimodal Open-Vocabulary video classification. Fig. 2 shows an overview of our method. In MOV, we design a novel asymmetrical cross-modal fusion mechanism using cross-attention to leverage complementary multimodal information differently for video and optical flow / audio. The core idea is to exploit the strong transferability in the pre-trained vision encoder, while allowing greater flexibility in finetuning flow and audio encoders. MOV is trained using multimodal inputs from base classes and is able to predict both base and novel classes during inference.
We carry out extensive experiments and ablation studies on Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). MOV shows clear improvements over CLIP (Radford et al., 2021), recent CLIP adaptation techniques (Zhou et al., 2021; Gao et al., 2021), as well as video-text pre-training methods (Akbari et al., 2021) on both base and novel classes. MOV also achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot methods, state-of-the-art VLM adaptation techniques, and a variety of video-text pre-training approaches. Furthermore, MOV is scalable with much stronger backbones, indicating its potential to be incorporated with large vision and language models.
2 RELATED WORK
Vision and language models. Learning a joint embedding space from vision and language modalities has been extensively studied during the past decade. Early works usually first encode the two modalities separately, using hand-crafted descriptors (Elhoseiny et al., 2013) or deep networks (Lei Ba et al., 2015) for images, and skip-gram text models for language (Frome et al., 2013). The cross-modality alignment is then achieved by metric learning (Frome et al., 2013) or language concepts (Li et al., 2017). Recently, learning vision and language modalities jointly through contrastive learning (Hadsell et al., 2006; Oord et al., 2018) has become a promising direction. Impressive performance has been achieved by utilizing stronger encoders for vision (Dosovitskiy et al., 2021) and language (Vaswani et al., 2017) together with web-scale pre-training data (Hinton et al., 2015; Radford et al., 2021). CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are two representative approaches which show strong zero-shot¹ performance on various downstream tasks. Despite this strong baseline, adapting pre-trained VLMs to specific vision domains in a more effective way remains critical and is being actively studied. Examples abound, including image classification (Zhou et al., 2021; 2022; Gao et al., 2021), object detection (Gu et al., 2022; Zhong et al., 2022; Kamath et al., 2021; Li et al., 2022b), image segmentation (Ghiasi et al., 2021; Li et al., 2022a), audio classification (Guzhov et al., 2022) and video action recognition (Wang et al., 2021; Ju et al., 2021; Ni et al., 2022). Our method extends the existing research by adapting pre-trained VLMs to multimodal video and investigating the impact of additional input modalities like flow and audio.
1We use the term “zero-shot” when we need to align with settings described in some existing works. Otherwise, we would use “open-vocabulary” which we believe is a more precise term.
Open-vocabulary video classification. Zero-shot or open-vocabulary video action recognition is a representative task in this domain. Similar to early works of vision and language learning, the video input and labeled texts are encoded with modality-specific pre-trained models such as S3D (Xie et al., 2018), R(2+1)D (Tran et al., 2018) for video, and Word2Vec (Mikolov et al., 2013) for text. Since the generated video and text embeddings are not aligned, various methods have been proposed to bridge the gap by mapping two modalities into a joint embedding space (Wang & Chen, 2017; Chen & Huang, 2021; Gao et al., 2019; Wu et al., 2016; Xu et al., 2016; Zhu et al., 2018), mapping vision modality to language space (Bishay et al., 2019; Brattoli et al., 2020; Hahn et al., 2019; Xu et al., 2017) or mapping language modality to vision space (Mandal et al., 2019; Zhang & Peng, 2018). These joint embedding mapping methods are further extended to audiovisual classification (Mercea et al., 2022; Mazumder et al., 2021; Parida et al., 2020). Our approach shows that we can significantly improve the performance of open-vocabulary video classification by leveraging strong pre-trained VLMs and other modalities like flow and audio. To our knowledge, this has not been done by prior works in this field.
Multimodal fusion for video. Videos are a natural source of multimodal data including motion and audio. Two-stream networks are used to model video and optical flow simultaneously for action recognition (Simonyan & Zisserman, 2014; Wang et al., 2016; Feichtenhofer et al., 2016; 2017). Late fusion is adopted (Simonyan & Zisserman, 2014; Wang et al., 2016) and then thoroughly studied (Feichtenhofer et al., 2016; 2017) with respect to how to better perform spatio-temporal fusion of the two modalities. In the domain of audiovisual fusion, early methods (Chen & Rao, 1998) usually adopt straightforward score fusion or stack input data for early fusion. Later research (Kazakos et al., 2019; Xiao et al., 2020; Fayek & Kumar, 2020; Nagrani et al., 2021; Chen & Ho, 2022; Chen et al., 2021; Zhao et al., 2022) focuses on developing better mid- or late-fusion strategies to improve the final performance. Different from existing works focusing on a fixed set of classes, we use multimodal fusion to help open-vocabulary video models generalize better to novel classes.
| 1. What is the main contribution of the paper regarding the utilization of pre-trained vision and text encoders for multimodal tasks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its simplicity and novelty compared to other works?
3. Do you have any concerns or suggestions regarding the experimental design and comparisons with other studies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes the use of pre-trained vision and text encoders for the task of generalization to novel classes for video tasks that contain video, language and audio modalities. The paper proposes a simple architecture to perform temporal fusion for audio and videos and uses a simple cross-attention mechanism to output predictions for base and novel classes. Additionally, the paper performs video-text, text-audio, and video-audio alignment to further boost the results on novel classes. The paper performs experiments on 4 different benchmarks and shows large gains compared to the existing methods.
Strengths And Weaknesses
The paper makes use of vision and language models (CLIP) pre-trained on large-scale image-text pairs. This alone, as expected, can be very helpful for the novel classes in video benchmarks. To utilize the image backbone for the video and audio modalities, the paper proposes a simple architecture to perform temporal fusion and uses a cross-attention mechanism to learn from different modalities for both base and novel classes. The experiments show up to 10% improvement in several benchmarks with multimodal data.
On the other hand, while being experimentally sufficient, the paper does not introduce a novel method. For example, alignment with different modalities has been done by other existing studies. Similarly, cross-attention has been used many times previously. Additionally, the paper is not very well written. It has many typos. For example, in many places you can find "mutlimodal" rather than "multimodal".
Did the authors consider the Merlot-Reserve paper for their study? It would be nice to compare their results to the Merlot-Reserve model that works on audio, video, and language modalities. It seems that the paper cites neither the Merlot nor the Merlot-Reserve papers, and they are highly related.
Also, can the authors comment about potential overlap when considering novel and base classes and the pre-trained vision and text encoders?
Clarity, Quality, Novelty And Reproducibility
This paper proposes the use of pre-trained vision and text encoders for the task of generalization to novel classes for video tasks that contain video, language and audio modalities. The paper proposes a simple architecture to perform temporal fusion for audio and videos and uses a simple cross-attention mechanism to output predictions for base and novel classes. Additionally, the paper performs video-text, text-audio, and video-audio alignment to further boost the results on novel classes. The paper performs experiments on 4 different benchmarks and shows large gains compared to the existing methods.
The paper mentions that the code and models will be released. It would be nice if the authors can provide more details about it. |
ICLR | Title
Multimodal Open-Vocabulary Video Classification via Vision and Language Models
Abstract
Utilizing vision and language models (VLMs) pre-trained on internet-scale image-text pairs is becoming a promising paradigm for open-vocabulary vision tasks. This work conducts an extensive study of multimodal open-vocabulary video classification via pre-trained VLMs by leveraging motion and audio that naturally exist in videos. We design an asymmetrical cross-modal fusion mechanism to aggregate multimodal information differently for video and optical flow / audio. Experiments on Kinetics and VGGSound show that introducing more modalities significantly improves the accuracy on seen classes, while generalizing better to unseen classes over existing approaches. Despite its simplicity, our method achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot techniques, video-text pre-training methods and recent VLM-based approaches. Code and models will be released.
1 INTRODUCTION
Building open-vocabulary models capable of predicting beyond a fixed set of training classes is of crucial importance in computer vision. Recently, vision and language models (VLMs) pre-trained on internet-scale image-text pairs, e.g., CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021), demonstrate impressive transferability on a wide range of vision tasks. Utilizing strong pre-trained VLMs is becoming a promising paradigm for open-vocabulary vision tasks including object detection (Gu et al., 2022) and image segmentation (Ghiasi et al., 2021; Li et al., 2022a).
In this work, we focus on the novel and challenging task of multimodal open-vocabulary video classification via pre-trained VLMs. We set up open-vocabulary video benchmarks by utilizing two existing large-scale multimodal video datasets: Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). Specifically, we construct two sets of classes: base (seen) and novel (unseen). For base classes, we have both training and testing videos, aiming at helping the pre-trained VLMs adapt to the video domain. For novel classes, we only have testing videos, mimicking the real-world challenge of open-vocabulary video classification. To the best of our knowledge, we are the first to study how to leverage pre-trained VLMs for multimodal open-vocabulary video classification.
We start with directly fine-tuning the vision encoder of CLIP (Radford et al., 2021) with the language encoder fixed, using the training videos from base classes. As shown in Fig. 1 (a-d), although there is a decent performance gain for base classes, the accuracy for novel classes decreases significantly. This observation corroborates with Zhou et al. (2022) on adapting pre-trained VLMs.
On the other hand, despite rich multimodal contents in internet videos, signals such as audio and motion are less explored in recent open-vocabulary models. This is in stark contrast to the human perception system that heavily relies on multimodal signals (Smith & Gasser, 2005). Can we leverage multimodal information to improve open-vocabulary models?
Instead of using specially designed modality-specific encoders (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to deal with optical flow and audio spectrogram.
We then conduct the same experiments by fine-tuning CLIP’s vision encoder but instead using flow or audio as the input. As shown in Fig. 1 (e-h), surprisingly, we find that fine-tuning on base classes
[Figure 1: accuracy (%) versus training epochs; panels (a)–(h) cover BASE and NOVEL classes for Kinetics VIDEO, VGGSound VIDEO, Kinetics FLOW, and VGGSound AUDIO. See the caption below.]
Figure 1: Fine-tuning pre-trained CLIP with video, flow and audio modalities. For all three modalities, fine-tuning on labeled base classes leads to significant accuracy improvement (a, c, e, g). However, when evaluating the same model on novel classes, the video modality shows decreasing performance (b, d), while the performance for both flow and audio modality is improving (f, h).
is able to also improve the performance on novel classes. This suggests that we may use flow or audio modality to improve the base to novel generalization of video modality.
In light of these observations, we propose MOV, a simple yet effective method for Multimodal Open-Vocabulary video classification. Fig. 2 shows an overview of our method. In MOV, we design a novel asymmetrical cross-modal fusion mechanism using cross-attention to leverage complementary multimodal information differently for video and optical flow / audio. The core idea is to exploit the strong transferability in the pre-trained vision encoder, while allowing greater flexibility in finetuning flow and audio encoders. MOV is trained using multimodal inputs from base classes and is able to predict both base and novel classes during inference.
We carry out extensive experiments and ablation studies on Kinetics-700 (Carreira et al., 2019) and VGGSound (Chen et al., 2020). MOV shows clear improvements over CLIP (Radford et al., 2021), recent CLIP adaptation techniques (Zhou et al., 2021; Gao et al., 2021), as well as video-text pre-training methods (Akbari et al., 2021) on both base and novel classes. MOV also achieves state-of-the-art results on UCF and HMDB zero-shot video action recognition benchmarks, significantly outperforming traditional zero-shot methods, state-of-the-art VLM adaptation techniques, and a variety of video-text pre-training approaches. Furthermore, MOV is scalable with much stronger backbones, indicating its potential to be incorporated with large vision and language models.
2 RELATED WORK
Vision and language models. Learning a joint embedding space from vision and language modalities has been extensively studied during the past decade. Early works usually first encode two modalities separately, using hand-crafted descriptors (Elhoseiny et al., 2013) or deep networks (Lei Ba et al., 2015) for images, and skip-gram text models for languages (Frome et al., 2013). The crossmodality alignment is then achieved by metric learning (Frome et al., 2013) or language concepts (Li et al., 2017). Recently, learning vision and language modalities jointly through contrastive learning (Hadsell et al., 2006; Oord et al., 2018) becomes a promising direction. Impressive performance has been achieved by utilizing stronger encoders for vision (Dosovitskiy et al., 2021), language (Vaswani et al., 2017) and web-scale pre-training data (Hinton et al., 2015; Radford et al., 2021). CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are two representative approaches which show strong zero-shot 1 performance on various downstream tasks. Despite this strong baseline, adapting pre-trained VLMs to specific vision domains in a more effective way remains critical and is being actively studied. Examples abound, including image classification (Zhou et al., 2021; 2022; Gao et al., 2021), object detection (Gu et al., 2022; Zhong et al., 2022; Kamath et al., 2021; Li et al., 2022b), image segmentation (Ghiasi et al., 2021; Li et al., 2022a), audio classification (Guzhov et al., 2022) and video action recognition (Wang et al., 2021; Ju et al., 2021; Ni et al., 2022). Our method extends the existing research by adapting pre-trained VLMs to multimodal video and investigating the impact of additional input modalities like flow and audio.
1We use the term “zero-shot” when we need to align with settings described in some existing works. Otherwise, we would use “open-vocabulary” which we believe is a more precise term.
Open-vocabulary video classification. Zero-shot or open-vocabulary video action recognition is a representative task in this domain. Similar to early works of vision and language learning, the video input and labeled texts are encoded with modality-specific pre-trained models such as S3D (Xie et al., 2018), R(2+1)D (Tran et al., 2018) for video, and Word2Vec (Mikolov et al., 2013) for text. Since the generated video and text embeddings are not aligned, various methods have been proposed to bridge the gap by mapping two modalities into a joint embedding space (Wang & Chen, 2017; Chen & Huang, 2021; Gao et al., 2019; Wu et al., 2016; Xu et al., 2016; Zhu et al., 2018), mapping vision modality to language space (Bishay et al., 2019; Brattoli et al., 2020; Hahn et al., 2019; Xu et al., 2017) or mapping language modality to vision space (Mandal et al., 2019; Zhang & Peng, 2018). These joint embedding mapping methods are further extended to audiovisual classification (Mercea et al., 2022; Mazumder et al., 2021; Parida et al., 2020). Our approach shows that we can significantly improve the performance of open-vocabulary video classification by leveraging strong pre-trained VLMs and other modalities like flow and audio. To our knowledge, this has not been done by prior works in this field.
Multimodal fusion for video. Videos are a natural source of multimodal data, including motion and audio. Two-stream networks are used to model video and optical flow simultaneously for action recognition (Simonyan & Zisserman, 2014; Wang et al., 2016; Feichtenhofer et al., 2016; 2017). Late fusion is adopted in Simonyan & Zisserman (2014) and Wang et al. (2016) and then thoroughly studied by Feichtenhofer et al. (2016; 2017) to determine how to better perform spatio-temporal fusion of the two modalities. In the domain of audiovisual fusion, early methods (Chen & Rao, 1998) usually adopt straightforward score fusion or stack the input data for early fusion. Later research (Kazakos et al., 2019; Xiao et al., 2020; Fayek & Kumar, 2020; Nagrani et al., 2021; Chen & Ho, 2022; Chen et al., 2021; Zhao et al., 2022) focuses on developing better mid- or late-fusion strategies to improve the final performance. Different from existing works focusing on a fixed set of classes, we use multimodal fusion to help open-vocabulary video models generalize better to novel classes.
3 METHOD
An overview of our proposed method is shown in Fig. 2. We next describe each component.
3.1 MODALITY-SPECIFIC ENCODING
Given a pre-trained vision and language model, e.g., CLIP (Radford et al., 2021), we denote its vision encoder as h(·|θh) and its language encoder as g(·|θg). For a multimodal video input, we sample N RGB frames V and calculate the corresponding optical flow images F, resulting in V = {v1, v2, . . . , vN} and F = {f1, f2, . . . , fN}. We also generate the spectrogram image A from the raw audio waveform. More implementation details can be found in Sec. 4. We use the same encoder architecture h(·|·) to extract feature representations for the video, flow and audio modalities, denoted as hv(·|θv), hf(·|θf), and ha(·|θa) respectively. The model parameters θv, θf and θa are all initialized with the weight θh from the pre-trained VLM’s vision encoder. Apart from being simple and easy to implement, this design has two additional advantages: 1) the performance of adapting the pre-trained VLM’s vision encoder to other modalities is competitive with in-domain methods (see the detailed study in Appendix A); 2) the vision encoder is trained to align with the language encoder, potentially helping the generalization from base to novel classes. We encode each modality separately as:
v = hv(V |θv), f = hf (F |θf ), a = ha(A|θa), (1)
where v and f are features from N frames, and a is the representation of a single spectrogram image.
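As a concrete illustration of Eq. 1, the sketch below routes the three inputs through three copies of one shared pre-trained vision encoder. This is a minimal sketch, assuming a generic `make_vision_encoder()` factory standing in for CLIP's ViT-B/16 and illustrative tensor shapes; it is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ModalityEncoders(nn.Module):
    """Eq. 1: encode video frames, flow images, and the audio spectrogram
    with three copies of the same pre-trained vision encoder."""

    def __init__(self, make_vision_encoder):
        super().__init__()
        # All three copies start from the same pre-trained weights (theta_h).
        self.video_enc = make_vision_encoder()
        self.flow_enc = make_vision_encoder()
        self.audio_enc = make_vision_encoder()

    def forward(self, video, flow, spec):
        # video, flow: (B, N, 3, 224, 224); spec: (B, 3, 128, T) spectrogram image.
        b, n = video.shape[:2]
        v = self.video_enc(video.flatten(0, 1)).view(b, n, -1)  # per-frame features (B, N, D)
        f = self.flow_enc(flow.flatten(0, 1)).view(b, n, -1)    # per-frame features (B, N, D)
        a = self.audio_enc(spec)                                 # single audio feature (B, D)
        return v, f, a
```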
To better aggregate the temporal features of video and flow modalities, we attach temporal fusion networks φv(·) and φf (·), consisting of L transformer layers each, on top of hv(·|θv) and hf (·|θf ). We denote the input of the l-th transformer layer as zl and the input z0 can be either v or f . Then the forward pass of the l-th layer in φv(·) and φf (·) can be formulated as:
yl = MSA(LN(zl)) + zl, (2)
zl+1 = MLP(LN(yl)) + yl, (3)
where LN stands for layer normalization, MSA represents multi-head self-attention, and MLP means multi-layer perceptron. For audio feature a, we simply attach an MLP module upon the backbone. We obtain the temporally fused features as:
vt = φv(v), ft = φf (f), at = MLP(a). (4)
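The temporal fusion networks of Eqs. 2–4 could be sketched as below; the pre-norm transformer layout, the 512-dim / 8-head configuration, and L = 2 follow the settings stated in the paper, while the module and argument names (and the 4x MLP expansion) are illustrative assumptions.

```python
import torch.nn as nn

class TemporalFusion(nn.Module):
    """L pre-norm transformer layers implementing Eqs. 2-3 (the paper uses L = 2)."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.ModuleDict({
                "ln1": nn.LayerNorm(dim),
                "msa": nn.MultiheadAttention(dim, heads, batch_first=True),
                "ln2": nn.LayerNorm(dim),
                "mlp": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim)),
            }) for _ in range(layers))

    def forward(self, z):                                        # z: (B, N, dim), i.e. v or f
        for blk in self.blocks:
            x = blk["ln1"](z)
            y = blk["msa"](x, x, x, need_weights=False)[0] + z   # Eq. 2
            z = blk["mlp"](blk["ln2"](y)) + y                    # Eq. 3
        return z                                                 # temporally fused v_t or f_t (Eq. 4)
```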
Finally, for the text modality, suppose we have p base classes with labels. We fill each of the class names into the 28 video classification prompts provided by CLIP (Radford et al., 2021), such as “a video of a person doing {class name}”, and then encode each sentence using the pre-trained language encoder g(·|θg) from the VLM. The embedding of each class is averaged over all templates and denoted as {Bi}_{i=1}^{p}.
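A small sketch of the prompt-ensembled class embeddings described above; the template list and the `encode_text` function are placeholders for CLIP's prompt set and text encoder, so treat this as an assumption-laden illustration rather than the exact pipeline.

```python
import torch

def class_text_embeddings(class_names, templates, encode_text):
    """Average text embeddings over prompt templates -> one embedding B_i per base class."""
    embeds = []
    for name in class_names:
        prompts = [t.format(name) for t in templates]  # e.g. "a video of a person doing {}."
        e = encode_text(prompts)                       # (num_templates, D), assumed L2-normalized
        e = e.mean(dim=0)
        embeds.append(e / e.norm())                    # renormalize the averaged embedding
    return torch.stack(embeds)                         # (num_classes, D)
```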
3.2 ASYMMETRICAL MULTIMODAL FUSION
We adopt an asymmetrical cross-attention mechanism to fuse multimodal features. Without loss of generality, as shown in Fig. 2, the method described here fuses one of the {flow, audio} modalities with the video modality. The algorithm can easily be extended to fusing video with more modalities.
For the video modality, we extract information from the other modalities to enhance the video feature. Thus we use vt as the attention query input, and ft or at from the other modality as the attention key and value inputs. The fused multimodal video feature vm can be written as:
vt = MCA(LN(vt), LN(xt)) + vt, where xt ∈ {ft, at}, (5)
vm = AvgPool(MLP(LN(vt)) + vt), (6)
where MCA denotes multi-head cross-attention, AvgPool denotes temporal average pooling.
For the audio and flow modalities, we adopt an asymmetrical design aiming at incorporating information from the video modality to enhance the generalization ability of the feature to novel classes. Since the video temporal fusion network φv(·) that generates the video feature vt is trained on base classes, vt loses the generalization ability to novel classes (shown in Fig. 1). Therefore we choose to directly use the frozen vision encoder’s output v instead of vt for better generalization to novel classes. We obtain the fused multimodal flow and audio features fm and am as:
ft = MCA(LN(ft), LN(v)) + ft,  at = MCA(LN(at), LN(v)) + at, (7)
fm = AvgPool(MLP(LN(ft)) + ft),  am = AvgPool(MLP(LN(at)) + at). (8)
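A minimal sketch of the cross-attention fusion in Eqs. 5–8, assuming the same 512-dim, 8-head setup as the temporal fusion block; the class name and the way the frozen feature v is passed in are illustrative choices, not the released code.

```python
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """One multi-head cross-attention block followed by an MLP and temporal average pooling."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.ln_q, self.ln_kv, self.ln_out = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mca = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, query_feats, kv_feats):
        # query_feats: features of the modality being enhanced, (B, N, D)
        # kv_feats: features of the other modality, (B, N, D)
        q, kv = self.ln_q(query_feats), self.ln_kv(kv_feats)
        x = self.mca(q, kv, kv, need_weights=False)[0] + query_feats   # Eq. 5 / Eq. 7
        x = self.mlp(self.ln_out(x)) + x
        return x.mean(dim=1)                                           # AvgPool over time (Eq. 6 / Eq. 8)

# Asymmetry: the video branch attends to the *fused* flow/audio feature x_t,
# while the flow/audio branch attends to the *frozen* per-frame video feature v, e.g.
# v_m = video_fusion(v_t, x_t);  f_m = flow_fusion(f_t, v);  a_m = audio_fusion(a_t, v)
```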
3.3 TRAINING AND INFERENCE ON BASE CLASSES
During training, each input multimodal video has a corresponding label y belonging to the base classes. We optimize the different modalities simultaneously by maximizing the video-text, flow-text and audio-text similarities. The training loss function can be formulated as:
L = α ( −log [ exp(sim(vm, By)/τ) / ∑_{i=1}^{p} exp(sim(vm, Bi)/τ) ] ) + (1 − α) ( −log [ exp(sim(xm, By)/τ) / ∑_{i=1}^{p} exp(sim(xm, Bi)/τ) ] ), (9)
where xm ∈ {fm, am} is the final fused flow or audio feature, α is the weight balancing the two loss terms, sim(·, ·) is the cosine similarity, and τ is a pre-defined temperature parameter. During training, we freeze the video encoder and the text encoder to retain their strong generalization to novel classes and to save computation, while for the other two modalities, flow and audio, we fine-tune the encoders end-to-end. An ablation study on fine-tuning different numbers of layers can be found in Tab. 6.
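The loss in Eq. 9 is a pair of temperature-scaled cross-entropies over cosine similarities. The sketch below is one plausible implementation, assuming α = 0.5 and τ = 0.01 as stated in the paper and hypothetical tensor names.

```python
import torch.nn.functional as F

def mov_training_loss(v_m, x_m, text_embeds, labels, alpha=0.5, tau=0.01):
    """Eq. 9: cross-entropy over cosine similarities to the base-class text embeddings B_i."""
    v = F.normalize(v_m, dim=-1)            # fused video feature, (B, D)
    x = F.normalize(x_m, dim=-1)            # fused flow or audio feature, (B, D)
    t = F.normalize(text_embeds, dim=-1)    # base-class text embeddings, (p, D)
    loss_video = F.cross_entropy(v @ t.t() / tau, labels)
    loss_other = F.cross_entropy(x @ t.t() / tau, labels)
    return alpha * loss_video + (1 - alpha) * loss_other
```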
For inference on base classes, we compute the probability belonging to the j-th class by:
P(j) = exp(sim(vm, Bj)/τ) / ∑_{i=1}^{p} exp(sim(vm, Bi)/τ),  j ∈ {1, 2, . . . , p}. (10)
3.4 GENERALIZATION TO NOVEL CLASSES
Similar to base classes, we obtain the text embeddings for novel classes as {Ni}_{i=1}^{q}, where q is the number of novel classes. In addition to the fused features fm or am, we also incorporate the video feature v extracted from the frozen video encoder, followed by temporal average pooling. Similar to Eq. 10, we compute the probability predictions as (here we only show the flow modality for simplicity):
Pf(j) = exp(sim(fm, Nj)/τf) / ∑_{i=1}^{q} exp(sim(fm, Ni)/τf),  Pv(j) = exp(sim(v, Nj)/τv) / ∑_{i=1}^{q} exp(sim(v, Ni)/τv),  j ∈ {1, 2, . . . , q}. (11)
We denote the probability distributions given by {Pf(j)}_{j=1}^{q} and {Pv(j)}_{j=1}^{q} as Df and Dv. In our experiments we find the curve of Dv tends to be much flatter (i.e., have higher information entropy) than Df when the temperatures τv and τf are both set to CLIP’s default value of 0.01, resulting in poor performance. We find that simply lowering τv to 0.003 while keeping τf and τa at 0.01 solves this issue. A detailed ablation study on the temperature can be found in Appendix B.
The final probability predictions for novel classes are calculated by a weighted sum:
P (j) = βPf (j) + (1− β)Pv(j). (12)
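A sketch of the novel-class prediction path (Eqs. 11–12) with the temperature fix from Appendix B; β = 0.25, τf = 0.01 and τv = 0.003 are the values reported in the paper, and the tensor names are assumptions.

```python
import torch.nn.functional as F

def novel_class_probs(fused_feat, frozen_video_feat, novel_text,
                      beta=0.25, tau_f=0.01, tau_v=0.003):
    """Eqs. 11-12: mix the fused flow/audio prediction with the frozen-video prediction."""
    f = F.normalize(fused_feat, dim=-1)           # f_m or a_m, (B, D)
    v = F.normalize(frozen_video_feat, dim=-1)    # frozen encoder output after temporal pooling, (B, D)
    t = F.normalize(novel_text, dim=-1)           # novel-class text embeddings N_i, (q, D)
    p_f = F.softmax(f @ t.t() / tau_f, dim=-1)
    p_v = F.softmax(v @ t.t() / tau_v, dim=-1)    # lower temperature sharpens D_v (Appendix B)
    return beta * p_f + (1 - beta) * p_v          # Eq. 12
```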
4 EXPERIMENTS
4.1 DATASETS
We describe the details of dataset splits for benchmarking multimodal open-vocabulary video classification and preparing flow and audio modalities.
Kinetics-700 (Carreira et al., 2019) splits. Kinetics-700 contains around 650k video clips annotated with 700 human action classes. Apart from visual appearance, motion plays an important role in distinguishing different action classes. For the dataset split, we randomly select 400 classes as base classes, and the testing videos of the remaining 300 classes are used for novel class evaluation.
Kinetics-700 optical flow. We follow a standard procedure (Xie et al., 2018; Han et al., 2020a;b) and use the TV-L1 algorithm (Zach et al., 2007) to extract optical flow in an unsupervised manner. To accommodate the pre-trained vision encoders, we first truncate the vertical and horizontal motion values to [−20, 20], then append a third all-zero channel. Finally, we apply a shift and scale transformation to map [−20, 20] to [0, 255].
VGGSound (Chen et al., 2020) splits. VGGSound contains around 200k video clips belonging to a total of 309 classes. Different from other audiovisual datasets like AudioSet (Gemmeke et al., 2017), VGGSound ensures the source of the sound is visually present inside the same video. Thus we consider this dataset an excellent test bed for our proposed method. We randomly select 154 base classes for training and leave the remaining 155 classes for novel class evaluation.
VGGSound audio spectrogram. We use the pre-processing practice of the audio spectrogram transformer (AST) (Gong et al., 2021) to convert waveforms to spectrogram images. Each raw audio signal is re-sampled to 16kHz and converted to a mono channel. We then calculate the log Mel spectrogram with 128 frequency bins. The processing Hamming window is 25ms with a hop length of 10ms. For a t-second audio input, the generated 2D spectrogram has the shape 128 × 100t. We normalize the spectrogram by subtracting its mean value and dividing by its standard deviation.
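The flow and audio preprocessing described above can be sketched as follows. This is a rough sketch, assuming torchaudio's MelSpectrogram as a stand-in for the AST pipeline (which uses Kaldi-style filterbanks), with window and hop lengths of 400 and 160 samples corresponding to 25ms and 10ms at 16kHz.

```python
import numpy as np
import torch
import torchaudio

def flow_to_image(flow_uv):
    """TV-L1 flow (H, W, 2) -> 3-channel uint8 image: clip to [-20, 20], add a zero channel, rescale."""
    f = np.clip(flow_uv.astype(np.float32), -20.0, 20.0)
    f = np.concatenate([f, np.zeros_like(f[..., :1])], axis=-1)
    return ((f + 20.0) / 40.0 * 255.0).astype(np.uint8)

def log_mel_spectrogram(waveform, sample_rate=16000):
    """128-bin log-Mel spectrogram with a 25ms Hamming window and 10ms hop, then mean/std normalization."""
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=400, win_length=400, hop_length=160,
        n_mels=128, window_fn=torch.hamming_window)(waveform)
    spec = torch.log(mel + 1e-6)
    return (spec - spec.mean()) / spec.std()
```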
4.2 IMPLEMENTATION
Data augmentation and tokenization. For each video, we first randomly sample 16 frames with a stride of 4 from the whole video sequence. We then apply the standard image augmentation used on ImageNet (He et al., 2016; 2019) with the same augmentation parameters across all frames to keep temporal consistency (Qian et al., 2021). For optical flow, we follow the practice of Xie et al. (2018) and Han et al. (2020a;b) by directly treating it as images and applying the same augmentation as the video. The augmented output tensors from both modalities have the shape (16, 224, 224, 3) and can be directly fed into CLIP’s ViT encoder. For audio, we apply specialized augmentations designed for spectrograms following Gong et al. (2021) and Nagrani et al. (2021). As the videos in VGGSound are all 10 seconds long, the generated spectrogram has a shape of (128, 100 × 10). We first conduct random cropping of (128, 800), sampling all frequency bands with a time duration of 8 seconds. SpecAugment (Park et al., 2019) is applied subsequently with a time masking range of 192 frames and frequency masking of 48 bins. Finally, to accommodate this single-channel output with the pre-trained tokenization layer, we make two necessary changes as in Gong et al. (2021): 1) expanding the spectrogram to three duplicated channels, 2) bilinearly interpolating the original positional encoding for spectrogram images with a different spatial resolution.
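The second change, resizing ViT positional embeddings to the spectrogram's patch grid, could look like the sketch below; the grid sizes in the comment (14x14 for a 224x224 image, 8x50 for a 128x800 spectrogram with 16x16 patches) are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_grid, new_grid):
    """Bilinearly interpolate ViT positional embeddings of shape (1, 1 + H*W, D) to a new patch grid.

    The class-token embedding is kept as-is; only the spatial part is resized,
    e.g. from a 14x14 grid (224x224 image) to an 8x50 grid (128x800 spectrogram)."""
    cls_tok, spatial = pos_embed[:, :1], pos_embed[:, 1:]
    d = spatial.shape[-1]
    spatial = spatial.reshape(1, old_grid[0], old_grid[1], d).permute(0, 3, 1, 2)
    spatial = F.interpolate(spatial, size=new_grid, mode="bilinear", align_corners=False)
    spatial = spatial.permute(0, 2, 3, 1).reshape(1, new_grid[0] * new_grid[1], d)
    return torch.cat([cls_tok, spatial], dim=1)
```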
Network architecture. We adopt CLIP’s ViT encoder for video, flow, and audio and the transformer encoder for text. We use 2 transformer layers for temporal fusion (L = 2) and 1 transformer layer for cross-attention, each layer has an embedding dimension of 512 and 8 attention heads. For cross-attention, query and key-value inputs use separate layer normalization.
Training hyper-parameters. We adopt the same hyper-parameters for experiments on Kinetics-700 and VGGSound except for the number of training epochs. We use a batch size of 1024 on 128 Cloud TPUv3 cores, the AdamW (Loshchilov & Hutter, 2017) optimizer with a weight decay of 0.05, and an initial learning rate of 1e-4 followed by half-cosine decay (He et al., 2019). We set the weight in Eq. 9 to α = 0.5. We train for 100 epochs on Kinetics-700 and 20 epochs on VGGSound, since we observe overfitting on the audio modality when training longer.
Inference hyper-parameters. For video and flow, we use 4×3 views following Arnab et al. (2021) and Liu et al. (2022), where a video is uniformly sampled into 4 clips temporally, and 3 spatial crops for each clip. For audio, we use 12 temporal views without spatial cropping. The final score is averaged over 12 views. For novel classes, we set the weight β in Eq. 12 to 0.25.
4.3 MULTIMODAL OPEN-VOCABULARY VIDEO CLASSIFICATION
We evaluate MOV on Kinetics-700 to utilize modalities of video, optical flow, and text, and on VGGSound to explore the combination of video, audio and text.
Comparison baselines. We evaluate four baselines: 1) CLIP (Radford et al., 2021), which directly encodes the video and class names into embeddings with pre-trained encoders; the final prediction is given by comparing similarity scores between video and text embeddings; 2) CoOp (Zhou et al., 2021), which learns continuous text prompt embeddings instead of manually selected templates for better adaptation to downstream tasks; 3) CLIP-Adapter (Gao et al., 2021), which attaches adapter heads to both the video and text encoders; 4) VATT (Akbari et al., 2021), which is a state-of-the-art multimodal video pre-training method and can perform zero-shot inference for video classification. We use the same datasets, backbone and hyper-parameters as ours, introduced in Sec. 4.2, to train (CLIP and VATT do not require training) and evaluate all methods.
Results. Tab. 1 shows results on Kinetics-700. Both CoOp and CLIP-Adapter achieve better performance than CLIP on base class prediction. For novel classes, however, we observe a large accuracy drop compared with CLIP. The degraded harmonic mean of these two methods indicates that their loss of generalization ability on novel classes outweighs their improvement on base classes.
Our method outperforms CLIP-Adapter by 8.8% on base classes, demonstrating the effectiveness of leveraging multimodal information. On novel classes, we observe an improvement of 1.4% over CLIP, indicating that bringing in flow modality improves the generalization of the video model.
We observe similar trends on VGGSound in Tab. 2. CoOp and CLIP-Adapter improve base classes but fail to generalize to novel classes, resulting in a lower harmonic mean of accuracy compared with CLIP. MOV, when fused with rich audio information, obtains a performance gain of 2.7% over CLIP on novel classes. We also conduct an additional study of generalized open-vocabulary prediction in Appendix C, where the information of whether a class is from base or novel is not known.
Backbone scaling. It is also important to analyze the scalability of MOV with stronger backbones. We experiment with the largest ViT-L/14 model released by CLIP as the vision encoder and a text encoder with embedding dimension increased to 768 and attention heads increased to 12. ViT-L/14 contains 3× more parameters than ViT-B/16 and we observe around 8% improvement on direct CLIP zero-shot evaluation on Kinetics-700 and 5% improvement on VGGSound, as indicated in the first 2 rows of Tab. 3. MOV is able to preserve the performance gain brought by using stronger CLIP models (last 2 rows of Tab. 3). Despite the significantly stronger CLIP baseline, MOV still improves 20.5% and 1.6% on Kinetics-700, and 19.3% and 2.0% on VGGSound, when comparing row 2 and row 4 of Tab. 3. The scaling performance shows that MOV has a great potential to be incorporated into recent giant vision and language models (Yuan et al., 2021; Yu et al., 2022).
4.4 CROSS-DATASET TRANSFER
Pre-training an open-vocabulary or zero-shot video classification model on large datasets like Kinetics (Carreira et al., 2019), ImageNet (Deng et al., 2009) or Sports-1M (Karpathy et al., 2014) and evaluating on UCF101 (Soomro et al., 2012) and HMDB51 (Kuehne et al., 2011) is the most common paradigm in the literature.
Table 4: Method | Encoder (vision† + text‡) | Pre-train data§ | UCF∗ / UCF | HMDB∗ / HMDB
GA (Mishra et al., 2018) | C3D + W2V | S1M | 17.3±1.1 / - | 19.3±2.1 / -
TARN (Bishay et al., 2019) | C3D + W2V | S1M | 19.0±2.3 / - | 19.5±4.2 / -
CWEGAN (Mandal et al., 2019) | I3D + W2V | IN, K400 | 26.9±2.8 / - | 30.2±2.7 / -
TS-GCN (Gao et al., 2019) | GLNet + W2V | IN-shuffle | 34.2±3.1 / - | 23.2±3.0 / -
PS-GNN (Gao et al., 2020) | GLNet + W2V | IN-shuffle | 36.1±4.8 / - | 25.9±4.1 / -
E2E (Brattoli et al., 2020) | R(2+1)D + W2V | K700 | 48.0 / 37.6 | 32.7 / 26.9
DASZL (Kim et al., 2021) | TSM + Attributes | IN, K400 | 48.9±5.8 / - | - / -
ER (Chen & Huang, 2021) | TSM + BERT | IN, K400 | 51.8±2.9 / - | 35.3±4.6 / -
ResT (Lin et al., 2022) | RN101 + W2V | K700 | 58.7±3.3 / 40.6 | 41.1±3.7 / 34.4
MIL-NCE (Miech et al., 2020) | S3D + W2V | HT100M | - / 29.3 | - / 10.4
VideoCLIP (Xu et al., 2021) | S3D + TSF | HT100M | - / 22.5 | - / 11.3
VATT (Akbari et al., 2021) | ViT + TSF | HT100M | - / 18.4 | - / 13.2
CLIP (Radford et al., 2021) | ViT-B/16 + TSF | WIT | 79.9±3.8 / 73.0 | 54.0±4.1 / 46.1
ActionCLIP (Wang et al., 2021) | ViT-B/16 + TSF | WIT+ | - / 69.5 | - / 50.5
X-CLIP (Ni et al., 2022) | ViT-B/16 + TSF | WIT+ | - / 72.0 | - / 44.6
MOV (Ours) | ViT-B/16 + TSF | WIT+ | 82.6±4.1 / 76.2 | 60.8±2.8 / 52.1
MOV (Ours) | ViT-L/14 + TSF | WIT+ | 87.1±3.2 / 80.9 | 64.7±3.2 / 57.8
† vision encoder: C3D (Tran et al., 2015), I3D (Carreira & Zisserman, 2017), GLNet (Szegedy et al., 2015), R(2+1)D (Tran et al., 2018), TSM (Lin et al., 2019), RN101 (He et al., 2016), S3D (Xie et al., 2018), ViT (Dosovitskiy et al., 2021).
‡ text encoder: W2V (Mikolov et al., 2013), BERT (Devlin et al., 2019), TSF (Vaswani et al., 2017).
§ pre-train data: S1M (Karpathy et al., 2014), IN (Deng et al., 2009), K400 (Kay et al., 2017), IN-shuffle (Mettes et al., 2016), K700 (Carreira et al., 2019), HT100M (Radford et al., 2021), WIT (Radford et al., 2021), WIT+ has additional training on Kinetics.
Two settings are used for performance evaluation (Brattoli et al., 2020). The first randomly chooses half of the classes in the test set and evaluates on the selected subset. To alleviate fluctuations caused by randomness, the evaluation is conducted independently 10 times and we report the mean accuracy with the standard deviation over all trials. We denote this setting as UCF∗ and HMDB∗ in Tab. 4. The second evaluation setting directly evaluates on the whole dataset, which is suitable for methods pre-trained purely on other datasets (Brattoli et al., 2020; Wang et al., 2021; Lin et al., 2022). We train MOV only using the 400 base classes subsampled from Kinetics-700, with video, flow and text. For evaluation on UCF and HMDB, we also use the same three modalities. The flow processing follows the same procedure described in Sec. 4.1.
We present a comprehensive comparison in Tab. 4. As in Lin et al. (2022), we list the vision and text encoders and the pre-train data used. We compare with three types of state-of-the-art methods: 1) zero-shot video classification approaches (top part), 2) video and language pre-training methods (Miech et al., 2020; Xu et al., 2021; Akbari et al., 2021) (middle part), 3) CLIP adaptation methods (Wang et al., 2021; Ni et al., 2022) (bottom part). Compared to these methods, we find that utilizing pre-trained vision and language models like CLIP yields much stronger performance. MOV achieves performance gains over CLIP of around 3% on UCF101 and around 6% on HMDB51. Compared with recently proposed adaptation methods like ActionCLIP and X-CLIP, MOV performs 4.2% to 6.7% better on UCF101 and 1.6% to 7.5% better on HMDB51.
4.5 ABLATION STUDY
Multimodal fusion for base classes. As demonstrated in Fig. 1 and Fig. 2, the asymmetrical cross-attention mechanism is proposed to improve generalization to novel classes. Here we show that cross-attention is also advantageous for base classes. Tab. 5 shows that, for Kinetics-700, simply using optical flow as input obtains 54.2% on base classes. When using score fusion, we observe performance on base classes identical to the video modality. Equipped with the proposed multimodal cross-attention fusion mechanism, we obtain a 2.6% improvement on base classes. For VGGSound, the performance of audio only is quite close to video only, and score fusion already brings a significant 6.5% improvement on base classes. Our cross-attention mechanism is able to further improve upon this strong baseline by 0.7%.
Fine-tuning. We fine-tune different layers of the encoder for flow and audio modality and show results in Tab. 6. As mentioned in Sec. 3, we use the same ViT-B/16 encoder and the same initialization weight for video, flow and audio. We iterate choices of fine-tuning the last 1, 3, 6, 9, and all 12 layers and find consistent performance gains with the increasing number of trainable layers on both modalities. Thus, we adopt the setting of fine-tuning all layers for flow and audio modality.
Per-class accuracy analysis. We analyze and interpret class-wise performance differences between MOV and CLIP baseline, which only uses video and text. As illustrated in Fig. 3a, we observe strong gains on classes that require motion understanding, e.g. yawning and long jump. While we also find decreased performance on classes with subtle or ambiguous motions, e.g. look in mirror and geocaching. In Fig. 3b, we observe audio modality can significantly help disambiguate classes sharing similar visual contents, e.g. people nose blowing and people laughing. For classes being difficult in the audio domain, e.g. sloshing water and wind noise, we observe decreased performances.
5 CONCLUSION
We propose a multimodal open-vocabulary video classification method named MOV via adopting pre-trained vision and language models. Motivated by observing drastic performance differences when using video, audio, and optical flow to generalize from base to novel classes, we design a novel asymmetrical cross-modal fusion mechanism to aggregate multimodal information. Extensive experiments on Kinetics, VGGSound, UCF, and HMDB benchmarks demonstrate the effectiveness of our method and the potential of scaling to giant vision and language models.
6 REPRODUCIBILITY STATEMENT
We plan to release our code, dataset splits, and models to facilitate reproducibility. We provided details of our model, data, implementation and experiments in Sec. 3, Sec. 4 and Appendix B. The CLIP model (Radford et al., 2021) and all datasets used in this work (Carreira et al., 2019; Chen et al., 2020; Soomro et al., 2012; Kuehne et al., 2011) are publicly available.
7 ETHICS STATEMENT
The proposed method shows better classification performance on multimodal videos with novel classes on Kinetics, VGGSound, UCF, and HMDB datasets, indicating its potential for real world applications. Our method is built upon vision and language models pre-trained on large-scale data from the internet, which may contain deficiencies and biases. Our models are used only for the purpose of evaluating research ideas. More rigorous studies for bias, fairness, etc., are required before using our models for any other purposes.
A COMPARISON WITH MODALITY-SPECIFIC PRE-TRAINED NETWORKS
As we have mentioned in the introduction, instead of using modality-specific pre-trained encoder networks or methods (Wang et al., 2016; Hershey et al., 2017), we choose a more straightforward path by directly utilizing the pre-trained vision encoder from VLMs with minimal modifications to handle optical flow and audio spectrograms. Here we list the experimental results on the audio of VGGSound in Tab. 7 to show the effectiveness of this design choice. All methods use only the audio training data and are evaluated on audio. Our MOV based on CLIP’s vision encoder shows competitive performance compared to other audio-specific encoders.
B TEMPERATURE TUNING
As described in Sec. 3.4, in addition to the fused flow and audio features {fm, am}, we also incorporate the video feature v extracted from the frozen video backbone to enhance the generalization performance on novel classes. We denote the probability distributions given by {Pf(j)}_{j=1}^{q}, {Pa(j)}_{j=1}^{q} and {Pv(j)}_{j=1}^{q} as Df, Da and Dv. In our experiments we find the curve of Dv tends to be much flatter (i.e., have higher information entropy) than Df and Da when the temperatures τv, τf and τa are all set to CLIP’s default value of 0.01. Neglecting this difference and combining the scores as in Eq. 12 would lead to poor performance. We address this problem by lowering τv so that the distribution Dv becomes more similar to Df and Da (i.e., has similar information entropy). As shown in Tab. 8, adjusting τv to 0.003 while keeping τf and τa at 0.01 greatly improves the performance, by 20.1% on Kinetics-700 and 15.8% on VGGSound.
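To make the entropy argument concrete, the tiny sketch below shows how lowering the temperature sharpens a softmax over cosine similarities; the similarity values are made up purely for illustration and do not come from the paper.

```python
import torch
import torch.nn.functional as F

def entropy(p):
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

# Illustrative cosine similarities of one video to q = 5 novel-class text embeddings.
sims = torch.tensor([0.31, 0.28, 0.27, 0.26, 0.25])
for tau in (0.01, 0.003):
    p = F.softmax(sims / tau, dim=-1)
    print(f"tau={tau}: top prob={float(p.max()):.3f}, entropy={float(entropy(p)):.3f}")
# Lowering tau_v sharpens D_v (lower entropy), bringing it closer to D_f and D_a before mixing.
```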
C DISCUSSION ON GENERALIZED OPEN-VOCABULARY PREDICTION
Our model adopts different inference paths for base and novel classes. The evaluation setting of dividing classes into base and novel is a very common practice in the existing open-vocabulary literature (Zhou et al., 2021; 2022; Gu et al., 2022; Ghiasi et al., 2021). We follow this established open-vocabulary setting to conduct experiments and evaluate our method.
If label category information isn’t given, evaluating purely on unseen classes is the classic setting of zero-shot evaluation (Xian et al., 2018). We benchmark our method in this zero-shot setting in Sec. 4.4 Cross-Dataset Transfer. Our method achieves state-of-the-art performance on commonly used UCF and HMDB zero-shot video classification benchmarks.
Here we consider another setting of generalized open-vocabulary prediction, where we train our model on base classes but the model does not know whether a class is base or novel during inference. A simple solution is to treat all classes as novel (i.e., use only the “Novel Class Prediction” path illustrated in Fig. 2). We conduct such an experiment on Kinetics-700 by training MOV on the 400 base classes and evaluating on all 700 classes, treating all of them as novel. In this scenario, we observe degraded performance for both our method MOV and the CLIP baseline. Since the number of candidate classes more than doubles (300 to 700), we consider this a reasonable result. MOV improves upon CLIP in both the original (+1.4%) and the generalized (+0.7%) open-vocabulary settings for predicting novel classes.
2. What are the strengths of the proposed approach, particularly in terms of its novelty and comprehensive experiments?
3. What are the weaknesses of the paper, especially regarding the cross-modal attention scheme and its lack of thorough explanation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's findings, such as the performance difference between MOV and CLIP, or the generalizability of the proposed method to other video and language tasks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a multi-modal open-vocabulary video classification method (MOV) that uses a vision and language pre-trained (VLP) model. In previous methods, severe performance degradation has been observed when the models are tested on novel classes. The paper uses multi-modal inputs and asymmetrical cross-modal fusion to aggregate multi-modal information.
Strengths And Weaknesses
Strength - Fig. 1 displays the motivation of the proposed method. The performance of open-vocabulary video classification on novel classes can be improved by considering various video modality in the VLP. The approach is the first attempt. The experimental results are comprehensive to demonstrate the effectiveness of the proposed method. Various datasets and models (with the backbone capability) are used. Cross-dataset validation has been conducted to show the generalization of the proposed method.
Weakness - such cross-modal attention has been used in several previous studies. The effectiveness of the asymmetric attention was demonstrated through experimental results, but there should be some reasonable discussion of why such a scheme can boost performance on the "novel" classes.
Clarity, Quality, Novelty And Reproducibility
In Section C of the supplementary results, MOV provides 46.7% on generalized 700-classes. The performance difference from CLIP is rather small (+0.7%). What are the reasons?
Can the proposed method be generalized to other video and language tasks such as QA, captioning, and retrieval? If so, are the results comparable to those in recent VLP studies?
In Tables 1 and 2, are the tested methods trained under the same conditions (e.g., the same Kinetics-400)?
ICLR | Title
Pointer Sentinel Mixture Models
Abstract
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. We explore applying the pointer sentinel mixture model to the LSTM, a standard recurrent neural network building block. Utilizing an LSTM that achieves 80.6 perplexity on the Penn Treebank, the pointer sentinel-LSTM model pushes perplexity down to 70.9 while using far fewer parameters than an LSTM that achieves similar results. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus.1
1 INTRODUCTION
A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person’s name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffers from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.
Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard softmax classifier that is being used in these models often struggles to correctly predict rare or previously unknown words.
Pointer networks (Vinyals et al., 2015) provide one potential solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.
We introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standard softmax classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gülçehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this commonly used dataset is small and no other freely available alternative exists that allows for learning long range dependencies, we also introduce a new benchmark dataset for language modeling called WikiText.
1Available for download at the WikiText dataset site
2 THE POINTER SENTINEL FOR LANGUAGE MODELING
Given a sequence of words w1, . . . , wN−1, our task is to predict the next word wN .
2.1 THE SOFTMAX-RNN COMPONENT
Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: p(w1, . . . , wN) = ∏_{i=1}^{N} p(wi | w1, . . . , wi−1). More precisely, at each time step i, we compute the RNN hidden state hi according to the previous hidden state hi−1 and the input xi such that hi = RNN(xi, hi−1). When all the N − 1 words have been processed by the RNN, the final state hN−1 is fed into a softmax layer which computes the probability over a vocabulary of possible words: pvocab(w) = softmax(U hN−1), where pvocab ∈ R^V, U ∈ R^{V×H}, H is the hidden size, and V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.
2.2 THE POINTER NETWORK COMPONENT
In this section, we propose a modification to pointer networks for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence (w1, . . . , wN−1) with the maximal attention score as the output.
The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states h, with each hidden state hi ∈ RH . However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the vector’s magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector q first. To produce the query q we compute q = tanh(WhN−1 + b), where W ∈ RH×H , b ∈ RH , and q ∈ RH . To generate the pointer attention scores, we compute the match between the previous RNN output states hi and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:
zi = q^T hi, (1)
a = softmax(z), (2)
where z ∈ R^L, a ∈ R^L, and L is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears:
pptr(w) = ∑_{i∈I(w,x)} ai, (3)
where I(w, x) results in all positions of the word w in the input x and pptr ∈ RV . This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).
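A minimal sketch of Eqs. 1–3 for a single time step; the argument names (the window of hidden states, their word ids, and the projection W, b) are hypothetical, and the scatter-add implements the per-word summation of attention mass.

```python
import torch
import torch.nn.functional as F

def pointer_sum_attention(h_window, h_last, window_ids, vocab_size, W, b):
    """Eqs. 1-3: project the last hidden state to a query, attend over the L window states,
    and sum the attention mass falling on each vocabulary id."""
    # h_window: (L, H) past hidden states; h_last: (H,); window_ids: LongTensor (L,) of word ids.
    q = torch.tanh(W @ h_last + b)            # query q, shape (H,)
    z = h_window @ q                          # (L,) inner products with past hidden states (Eq. 1)
    a = F.softmax(z, dim=-1)                  # (L,) attention over window positions (Eq. 2)
    p_ptr = torch.zeros(vocab_size)
    p_ptr.scatter_add_(0, window_ids, a)      # Eq. 3: mass of repeated words is summed per word
    return a, p_ptr
```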
Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words.
To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context.
2.3 THE POINTER SENTINEL MIXTURE MODEL
While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer.
Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(zi = k|xi) where zi is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer is used and 1 means only the softmax-RNN is used.
p(yi|xi) = g pvocab(yi|xi) + (1 − g) pptr(yi|xi). (4)
While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network’s supervision for the RNN component.
2.4 DETAILS OF THE GATING FUNCTION
To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 1. This element is
computed using an inner product between the query and the sentinel² vector s ∈ R^H. This change can be summarized by changing Eq. 2 to a = softmax([z; q^T s]). We define a ∈ R^{V+1} to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: g = a[V + 1].
Any probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes:
pptr(yi|xi) = (1 / (1 − g)) a[1 : V], (5)
where we denoted [1 : V ] to mean the first V elements of the vector. The final mixture model is the same as Eq. 4 but with the updated Eq. 5 for the pointer probability.
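A sketch of the sentinel-augmented softmax and the resulting mixture (Eqs. 4–5); it reuses the window scores z and word ids from the previous sketch, and the trick of mixing the unnormalized pointer mass is noted in the comments. Names are illustrative.

```python
import torch
import torch.nn.functional as F

def pointer_sentinel_mixture(z, sentinel_score, window_ids, p_vocab):
    """Append the sentinel score to the window scores, read the gate g off the joint softmax,
    and mix the pointer and softmax-RNN distributions."""
    a = F.softmax(torch.cat([z, sentinel_score.reshape(1)]), dim=-1)  # length L + 1
    g = a[-1]                                  # gate: probability mass routed to the RNN softmax
    p_ptr = torch.zeros_like(p_vocab)
    p_ptr.scatter_add_(0, window_ids, a[:-1])  # unnormalized pointer mass, sums to 1 - g
    # Equivalent to Eq. 4 with Eq. 5: g * p_vocab + (1 - g) * (p_ptr / (1 - g)).
    return g * p_vocab + p_ptr, g
```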
This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. By integrating the gating function into the pointer computation, it is influenced by both the RNN hidden state and the pointer window’s hidden states.
2.5 MOTIVATION FOR THE SENTINEL AS GATING FUNCTION
To make the best decision possible regarding which component to use the gating function must have as much context as possible. As we increase the window of words for the pointer component to consider, the RNN hidden state by itself isn’t guaranteed to accurately recall the identity or order of words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector.
If we want a pointer window where the length L is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component’s window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond the capability of the fixed dimensionality RNN hidden state.
For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query q, generated using the RNN hidden state hN−1, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid maintaining state for words that may have fallen out of the window.
2.6 POINTER SENTINEL LOSS FUNCTION
We minimize the cross-entropy loss −∑_j ŷij log p(yij|xi), where ŷi is a one-hot encoding of the correct output. During training, as ŷi is one hot, only a single mixed probability p(yij) must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for p(yi|xi), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than on the GPU.
Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output ŷi if it exists in the input. In the case of our mixture model the pointer loss instead becomes −log(g + ∑_{i∈I(y,x)} ai), where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output ŷi exists only in the softmax-RNN vocabulary. There is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs no penalty and the loss is entirely determined by the loss of the softmax-RNN component.
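The two quantities discussed above, the mixed negative log-likelihood and the pointer term, could be computed as in the sketch below; how they are combined during training follows the description in the text, and the argument names are hypothetical.

```python
import torch

def mixture_nll(g, a_window, window_ids, p_vocab, target_id):
    """-log p(y | x): gate-weighted softmax probability plus the (unnormalized) pointer mass
    on every window position holding the target, so only one mixed probability is needed."""
    ptr_mass = a_window[window_ids == target_id].sum()
    p_target = g * p_vocab[target_id] + ptr_mass
    return -torch.log(p_target.clamp_min(1e-12))

def pointer_term(g, a_window, window_ids, target_id):
    """-log(g + sum of attention on the target): no extra penalty once the pointer either covers
    the target or routes all of its mass to the sentinel gate."""
    ptr_mass = a_window[window_ids == target_id].sum()
    return -torch.log((g + ptr_mass).clamp_min(1e-12))
```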
2A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.
2.7 PARAMETERS AND COMPUTATION TIME
The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the model size required to achieve similar performance using a standard LSTM. The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^{H×H} and b ∈ R^H, and the sentinel vector embedding, s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H² + 2H parameters are minor compared to a single LSTM layer’s 8H² + 4H parameters. Most models also use multiple LSTM layers.
In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.
3 RELATED WORK
Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to deep neural sequence models.
Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words.
Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has modified the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).
In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier being used in these models often struggles to correctly predict rare or previously unknown words.
Attention-based pointer mechanisms were introduced in Vinyals et al. (2015) where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gülçehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.
Gülçehre et al. (2016) introduce a pointer softmax model that can generate output from either the vocabulary softmax of an RNN or the location softmax of the pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled according to the switching network and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.
Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gülçehre et al. (2016), all states which produce the same output are merged by summing their probabilities. The model requires a complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential path explosion.
4 WIKITEXT - A BENCHMARK FOR LANGUAGE MODELING
We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.
4.1 PENN TREEBANK
In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training, 73k validation, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with 〈eos〉, and all other punctuation was removed. The vocabulary is the most frequent 10k words with OoV tokens replaced by an 〈unk〉 token. For full statistics, refer to Table 1.
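For readers who want to reproduce this style of pre-processing, a rough sketch is given below; the exact rules of the original Mikolov et al. (2010) scripts may differ, so treat this only as an approximation.

```python
import re

def ptb_style_preprocess(line, vocab):
    """Approximate Mikolov-style pre-processing of one raw line."""
    line = line.lower()                            # lower-case everything
    line = re.sub(r"\d[\d.,]*", "N", line)         # replace numbers with N
    line = re.sub(r"[^\w\s<>]", " ", line)         # drop remaining punctuation
    tokens = line.split() + ["<eos>"]              # newline becomes <eos>
    return [t if t in vocab else "<unk>" for t in tokens]  # 10k-word cap
```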
4.2 REASONS FOR A NEW DATASET
While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. The appendix contains a graph illustrating this using a Zipfian plot over the training partition of the PTB, with the curve stopping abruptly at the 10k limit. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail is problematic.
Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and have made this available to the community.
4.3 CONSTRUCTION AND PRE-PROCESSING
We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good
articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting text from Wikipedia mark-up is nontrivial due to the large number of macros in use, used for metric conversions, abbreviations, language notation, and date handling.
Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with 〈formula〉 tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. Following Chelba et al. (2013) a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the 〈unk〉 token, also a part of the vocabulary.
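The number-splitting augmentation (8,600 → 8 @,@ 600) can be approximated with a small post-tokenization pass; this regex is an illustrative sketch, not the exact modification made to the Moses tokenizer.

```python
import re

def split_number(token):
    # Insert " @,@ " / " @.@ " between digit groups, e.g. "8,600" -> "8 @,@ 600".
    return re.sub(r"(?<=\d)([.,])(?=\d)", r" @\1@ ", token)

print(split_number("8,600"))    # 8 @,@ 600
print(split_number("3.14"))     # 3 @.@ 14
```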
4.4 STATISTICS
The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.
The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing, only differing in the vocabularies. For full statistics, refer to Table 1.
5 EXPERIMENTS
5.1 TRAINING DETAILS
As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.
We also use truncated backpropagation through time (BPTT) in a different manner to many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed. For running truncated BPTT, BPTT is run for k2 timesteps once every k1 timesteps. For many RNN language modeling training schemes, k1 = k2, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. As such, most words in the training data will never experience a full backpropagation for k timesteps.
In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k1 = 1 and k2 = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.
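A minimal sketch of this k1 = 1, k2 = L scheme is shown below; `model.loss` and `model.update` are placeholder hooks, and a real implementation must also regenerate the stale window of RNN outputs after every gradient step, as described above.

```python
def train_epoch(tokens, model, L=100):
    """Slide a length-L window one token at a time (k1 = 1, k2 = L)."""
    for t in range(L, len(tokens)):
        window = tokens[t - L:t]          # the L most recent words for the pointer
        target = tokens[t]                # only the final prediction is trained
        loss = model.loss(window, target) # backprop through all L timesteps
        model.update(loss)                # gradient step; window outputs regenerated
```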
5.2 MODEL DETAILS
Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).3 We evaluate the medium model configuration which features a two layer LSTM of hidden size 650. We compare against the large model configuration which features a two layer LSTM of hidden size 1500.
3The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping early batches may experience excessively high perplexity, though this settles rapidly.
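A sketch of this optimization schedule (learning-rate halving on worse validation perplexity, a patience of three epochs, a 64-epoch cap, and global-norm clipping at 1) follows; the method names on `model` are placeholders, not an actual API.

```python
def fit(model, train_data, valid_data, lr=1.0, max_epochs=64, patience=3):
    best, stale, prev = float("inf"), 0, float("inf")
    for epoch in range(max_epochs):
        model.train_one_epoch(train_data, lr=lr, max_grad_norm=1.0)  # clip at 1
        ppl = model.perplexity(valid_data)
        if ppl > prev:
            lr /= 2.0                 # halve when validation perplexity worsens
        if ppl < best:
            best, stale = ppl, 0
        else:
            stale += 1
            if stale >= patience:     # stop after 3 epochs without improvement
                break
        prev = ppl
    return model
```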
We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.
5.3 COMPARISON OVER PENN TREEBANK
Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.
We also test a variational LSTM that uses zoneout, which serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)’s variational LSTM without Monte Carlo dropout averaging.
5.4 COMPARISON OVER WIKITEXT-2
As WikiText-2 is being introduced in this work, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and
the medium variational LSTM used in Gal (2015).4 Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models. Table 3 shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.
6 ANALYSIS
6.1 IMPACT ON RARE WORDS
A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. The RNN may better use hidden state capacity by relying on the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax.
The appendix contains a graph which shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM. Words are split across buckets according to frequency. As the words become rarer, the pointer sentinel-LSTM has stronger improvements in perplexity. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.
While the improvements are largest on rare words, we can see the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.
6.2 QUALITATIVE ANALYSIS OF POINTER USAGE
In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the appendix.
As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training). The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage). Surprisingly, the pointer component was also used for many frequent tokens. For selecting units of measurement (tons, kilograms, . . . ) or the short scale of numbers (thousands, millions, billions, . . . ), the pointer would refer to recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting frequent verbs such as said.
Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately
4https://github.com/yaringal/BayesianRNN
track exactly how long it was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.
7 CONCLUSION
We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. The pointer sentinel mixture model can be applied to any classifier that ends in a softmax, including various recurrent neural network building blocks. When applied to a standard LSTM, the pointer sentinel-LSTM achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time.
We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope these new datasets can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling. | 1. What is the main contribution of the paper, and how does it differ from previous works, particularly Gulcehre et al.'s approach?
2. How does the proposed model decide when to use the pointer component, and what is the role of the sentinel in this process?
3. What is the difference between the proposed mixture model and the model of Gulcehre et al., which also uses a mixture model?
4. What is the significance of the difference in the use of the pointer network between the proposed model and Gulcehre et al.'s model?
5. How does the proposed dataset compare to other language modeling datasets, such as enwik8 and Text8? | Review | Review
This paper proposes augmenting RNN-based language models with a pointer network in order to deal better with rare words. The pointer network can point to words in the recent context, and hence the prediction for each time step is a mixture between the usual softmax output and the pointer distribution over the recent words. The paper also introduces a new language modelling dataset, which overcomes some of the shortcomings of previous datasets.
The reason for the score I gave this paper is that I find the proposed model a direct application of the previous work of Gulcehre et al., which follows a similar approach but for machine translation and summarization. The main differences I find are that Gulcehre et al. use an encoder-decoder architecture, and use the attention weights of the encoder to point to locations of words in the input, while here an RNN is used and a pointer network produces a distribution over the full vocabulary (by summing the softmax probabilities of words in the recent context). The context (query) vector for the pointing network is also different, but this is also a direct consequence of having a different application.
While the paper describes the differences between the proposed approach and Gulcehre et al.’s approach, I find some of the claims either wrong or not that significant. For example, quoting from Section 1:
“Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gulcehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel.”
As far as I can tell, your model also uses the recent hidden state to form a query vector, which is matched by the pointer network to previous words. Can you please clarify what you mean here?
In addition, quoting from section 3 which describes the model of Gulcehre et al.:
“Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use”
This is not correct. The model of Gulcehre is also a mixture model, where an MLP with sigmoid output (switching network) is used to form a mixture between softmax prediction and locations of the input text.
Finally, in the following quote, also from section 3:
“The pointer network is not used as a source of information for the switching network as in our model.”
It is not clear what the authors mean by “source of information” here. Is it the fact that the switching probability is part of the pointer softmax? I am wondering how significant this difference is.
With regards to the proposed dataset, there are also other datasets typically used for language modelling, including the Hutter Prize Wikipedia (enwik8) dataset (Hutter, 2012) and the Text8 dataset (Mahoney, 2009). Can you please comment on the differences between your dataset and those as well?
I would be happy to discuss with the authors the points I raised, and I am open to changing my vote if there is any misunderstanding on my part. |
ICLR | Title
Pointer Sentinel Mixture Models
Abstract
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. We explore applying the pointer sentinel mixture model to the LSTM, a standard recurrent neural network building block. Utilizing an LSTM that achieves 80.6 perplexity on the Penn Treebank, the pointer sentinel-LSTM model pushes perplexity down to 70.9 while using far fewer parameters than an LSTM that achieves similar results. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus.1
1 INTRODUCTION
A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person’s name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffers from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.
Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard softmax classifier that is being used in these models often struggles to correctly predict rare or previously unknown words.
Pointer networks (Vinyals et al., 2015) provide one potential solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.
We introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standard softmax classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gülçehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this commonly used dataset is small and no other freely available alternative exists that allows for learning long range dependencies, we also introduce a new benchmark dataset for language modeling called WikiText.
1Available for download at the WikiText dataset site
2 THE POINTER SENTINEL FOR LANGUAGE MODELING
Given a sequence of words w1, . . . , wN−1, our task is to predict the next word wN .
2.1 THE SOFTMAX-RNN COMPONENT
Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens:
p(w_1, . . . , w_N) = ∏_{i=1}^{N} p(w_i | w_1, . . . , w_{i−1}).
More precisely, at each time step i, we compute the RNN hidden state h_i according to the previous hidden state h_{i−1} and the input x_i such that h_i = RNN(x_i, h_{i−1}). When all the N − 1 words have been processed by the RNN, the final state h_{N−1} is fed into a softmax layer which computes the probability over a vocabulary of possible words: p_vocab(w) = softmax(U h_{N−1}), where p_vocab ∈ R^V, U ∈ R^{V×H}, H is the hidden size, and V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.
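A minimal NumPy sketch of this component is given below; for brevity it uses a plain tanh recurrence in place of the LSTM, so it should be read only as an illustration of how p_vocab = softmax(U h_{N−1}) is produced.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_vocab_distribution(inputs, Wx, Wh, bh, U):
    """Run a simple RNN over x_1..x_{N-1} and return p_vocab plus all h_i.

    inputs : (N-1, E) input word vectors; Wx (H,E), Wh (H,H), bh (H,), U (V,H)
    """
    h = np.zeros(Wh.shape[0])
    states = []
    for x in inputs:                       # h_i = RNN(x_i, h_{i-1})
        h = np.tanh(Wx @ x + Wh @ h + bh)
        states.append(h)
    p_vocab = softmax(U @ h)               # distribution over the V words
    return p_vocab, np.stack(states)       # states are reused by the pointer
```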
2.2 THE POINTER NETWORK COMPONENT
In this section, we propose a modification to pointer networks for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence (w_1, . . . , w_{N−1}) with the maximal attention score as the output.
The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states h, with each hidden state hi ∈ RH . However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the vector’s magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector q first. To produce the query q we compute q = tanh(WhN−1 + b), where W ∈ RH×H , b ∈ RH , and q ∈ RH . To generate the pointer attention scores, we compute the match between the previous RNN output states hi and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:
z_i = q^T h_i, (1)
a = softmax(z), (2)
where z ∈ R^L, a ∈ R^L, and L is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears:
p_ptr(w) = ∑_{i∈I(w,x)} a_i, (3)
where I(w, x) results in all positions of the word w in the input x and p_ptr ∈ R^V. This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).
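Continuing the sketch above, Eqs. 1–3 amount to the following; `window_words` holds the vocabulary ids of the window positions and all names are illustrative.

```python
import numpy as np

def pointer_sum_attention(states, h_last, W, b, window_words, V):
    """Pointer sum attention over the last L hidden states (Eqs. 1-3)."""
    q = np.tanh(W @ h_last + b)              # query projected from h_{N-1}
    z = states @ q                           # z_i = q^T h_i            (Eq. 1)
    e = np.exp(z - z.max())
    a = e / e.sum()                          # a = softmax(z)           (Eq. 2)
    p_ptr = np.zeros(V)
    for pos, word in enumerate(window_words):
        p_ptr[word] += a[pos]                # sum mass over repeats    (Eq. 3)
    return p_ptr, a
```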
Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words.
To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context.
2.3 THE POINTER SENTINEL MIXTURE MODEL
While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer.
Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(zi = k|xi) where zi is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer is used and 1 means only the softmax-RNN is used.
p(y_i | x_i) = g p_vocab(y_i | x_i) + (1 − g) p_ptr(y_i | x_i). (4)
While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network’s supervision for the RNN component.
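With the two base distributions in hand, Eq. 4 itself is a single line; how the gate g is produced is the subject of the next subsection.

```python
def mixture(p_vocab, p_ptr, g):
    # p(y_i | x_i) = g * p_vocab(y_i | x_i) + (1 - g) * p_ptr(y_i | x_i)   (Eq. 4)
    return g * p_vocab + (1.0 - g) * p_ptr
```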
2.4 DETAILS OF THE GATING FUNCTION
To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 1. This element is
computed using an inner product between the query and the sentinel2 vector s ∈ R^H. This change can be summarized by changing Eq. 2 to a = softmax([z; q^T s]). We define a ∈ R^{V+1} to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: g = a[V + 1].
Any probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes:
p_ptr(y_i | x_i) = (1 / (1 − g)) a[1 : V], (5)
where we denoted [1 : V ] to mean the first V elements of the vector. The final mixture model is the same as Eq. 4 but with the updated Eq. 5 for the pointer probability.
This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. By integrating the gating function into the pointer computation, it is influenced by both the RNN hidden state and the pointer window’s hidden states.
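A sketch of the sentinel-augmented pointer, following the modified Eq. 2 and Eq. 5, is shown below; variable names are assumptions, and the scatter over the window reuses the pointer-sum idea above.

```python
import numpy as np

def sentinel_pointer(states, h_last, W, b, s, window_words, V):
    """Return the gate g and the renormalized pointer distribution (Eq. 5)."""
    q = np.tanh(W @ h_last + b)
    z = states @ q                            # scores for the L window states
    scores = np.append(z, q @ s)              # append the sentinel score q^T s
    e = np.exp(scores - scores.max())
    a = e / e.sum()                           # a = softmax([z; q^T s])
    g = a[-1]                                 # last element is the gate value
    p_ptr = np.zeros(V)
    for pos, word in enumerate(window_words):
        p_ptr[word] += a[pos]                 # raw pointer mass sums to 1 - g
    return g, p_ptr / (1.0 - g)               # renormalize as in Eq. 5
```

Combining this output with the softmax-RNN distribution via Eq. 4 recovers the full model's prediction.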
2.5 MOTIVATION FOR THE SENTINEL AS GATING FUNCTION
To make the best decision possible regarding which component to use the gating function must have as much context as possible. As we increase the window of words for the pointer component to consider, the RNN hidden state by itself isn’t guaranteed to accurately recall the identity or order of words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector.
If we want a pointer window where the length L is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component’s window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond the capability of the fixed dimensionality RNN hidden state.
For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query q, generated using the RNN hidden state hN−1, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid maintaining state for words that may have fallen out of the window.
2.6 POINTER SENTINEL LOSS FUNCTION
We minimize the cross-entropy loss of −∑_j ŷ_{ij} log p(y_{ij} | x_i), where ŷ_i is a one-hot encoding of the correct output. During training, as ŷ_i is one hot, only a single mixed probability p(y_{ij}) must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for p(y_i | x_i), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than the GPU.
Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output ŷ_i if it exists in the input. In the case of our mixture model the pointer loss instead becomes − log ( g + ∑_{i∈I(y,x)} a_i ), where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output ŷ_i exists only in the softmax-RNN vocabulary. There is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs no penalty and the loss is entirely determined by the loss of the softmax-RNN component.
2A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.
2.7 PARAMETERS AND COMPUTATION TIME
The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the model size required to achieve similar performance using a standard LSTM. The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^{H×H} and b ∈ R^H, and the sentinel vector embedding, s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H^2 + 2H parameters are minor compared to a single LSTM layer’s 8H^2 + 4H parameters. Most models also use multiple LSTM layers.
In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.
3 RELATED WORK
Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to deep neural sequence models.
Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words.
Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has modified the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).
In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier being used in these models often struggles to correctly predict rare or previously unknown words.
Attention-based pointer mechanisms were introduced in Vinyals et al. (2015) where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gülçehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.
Gülçehre et al. (2016) introduce a pointer softmax model that can generate output from either the vocabulary softmax of an RNN or the location softmax of the pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled according to the switching network and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.
Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gülçehre et al. (2016), all states which produce the same output are merged by summing their probabilities. The model requires a complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential path explosion.
4 WIKITEXT - A BENCHMARK FOR LANGUAGE MODELING
We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.
4.1 PENN TREEBANK
In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training, 73k validation, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with 〈eos〉, and all other punctuation was removed. The vocabulary is the most frequent 10k words with OoV tokens replaced by an 〈unk〉 token. For full statistics, refer to Table 1.
4.2 REASONS FOR A NEW DATASET
While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. The appendix contains a graph illustrating this using a Zipfian plot over the training partition of the PTB, with the curve stopping abruptly at the 10k limit. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail is problematic.
Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and have made this available to the community.
4.3 CONSTRUCTION AND PRE-PROCESSING
We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good
articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting text from Wikipedia mark-up is nontrivial due to the large number of macros in use, used for metric conversions, abbreviations, language notation, and date handling.
Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with 〈formula〉 tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. Following Chelba et al. (2013) a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the 〈unk〉 token, also a part of the vocabulary.
4.4 STATISTICS
The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.
The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing, only differing in the vocabularies. For full statistics, refer to Table 1.
5 EXPERIMENTS
5.1 TRAINING DETAILS
As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.
We also use truncated backpropagation through time (BPTT) in a different manner to many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed. For running truncated BPTT, BPTT is run for k2 timesteps once every k1 timesteps. For many RNN language modeling training schemes, k1 = k2, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. As such, most words in the training data will never experience a full backpropagation for k timesteps.
In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k1 = 1 and k2 = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.
5.2 MODEL DETAILS
Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).3 We evaluate the medium model configuration which features a two layer LSTM of hidden size 650. We compare against the large model configuration which features a two layer LSTM of hidden size 1500.
3The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping early batches may experience excessively high perplexity, though this settles rapidly.
We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.
5.3 COMPARISON OVER PENN TREEBANK
Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.
We also test a variational LSTM that uses zoneout, which serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)’s variational LSTM without Monte Carlo dropout averaging.
5.4 COMPARISON OVER WIKITEXT-2
As WikiText-2 is being introduced in this work, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and
the medium variational LSTM used in Gal (2015).4 Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models. Table 3 shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.
6 ANALYSIS
6.1 IMPACT ON RARE WORDS
A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. The RNN may better use hidden state capacity by relying on the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax.
The appendix contains a graph which shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM. Words are split across buckets according to frequency. As the words become rarer, the pointer sentinel-LSTM has stronger improvements in perplexity. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.
While the improvements are largest on rare words, we can see the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.
6.2 QUALITATIVE ANALYSIS OF POINTER USAGE
In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the appendix.
As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training). The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage). Surprisingly, the pointer component was also used for many frequent tokens. For selecting units of measurement (tons, kilograms, . . . ) or the short scale of numbers (thousands, millions, billions, . . . ), the pointer would refer to recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting frequent verbs such as said.
Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately
4https://github.com/yaringal/BayesianRNN
track exactly how long it was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.
7 CONCLUSION
We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. The pointer sentinel mixture model can be applied to any classifier that ends in a softmax, including various recurrent neural network building blocks. When applied to a standard LSTM, the pointer sentinel-LSTM achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time.
We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope these new datasets can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling. | 1. What is the main contribution of the paper in language modeling?
2. How does the proposed approach differ from traditional pointer networks in language modeling?
3. What are the advantages of using a sentinel vector in the proposed method?
4. Are there any variations of the sentinel mixture implementation that could be explored further?
5. What is the significance of the new WikiText language modeling dataset introduced in the paper? | Review | Review
This work is essentially a combined pointer network applied to language modelling.
The clever point is that the paper targets language modelling with longer context, where a memory of previously seen words (especially rare words) is very useful for predicting the rest of the sentence.
Hence, a combination of a pointer network and a standard language model can balance copying seen words against predicting unseen words.
Typically, as with combined pointer networks applied to sentence compression, a vector representation of the source sequence would be used to compute the gate.
This paper, instead, introduces a sentinel vector to carry out the mixture model, which is suitable in the case of language modelling.
I would be interested in the variations of sentinel mixture implementation, though the current version has achieved very good results.
In addition, the new WikiText language modelling dataset is very interesting.
It could well become a more standard dataset than the PTB for continuously updated language modelling benchmarks.
Overall, this is a well-written paper. I recommend it to be accepted. |
ICLR | Title
Pointer Sentinel Mixture Models
Abstract
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the context makes the prediction unambiguous. We introduce the pointer sentinel mixture architecture for neural sequence models which has the ability to either reproduce a word from the recent context or produce a word from a standard softmax classifier. We explore applying the pointer sentinel mixture model to the LSTM, a standard recurrent neural network building block. Utilizing an LSTM that achieves 80.6 perplexity on the Penn Treebank, the pointer sentinel-LSTM model pushes perplexity down to 70.9 while using far fewer parameters than an LSTM that achieves similar results. In order to evaluate how well language models can exploit longer contexts and deal with more realistic vocabularies and corpora we also introduce the freely available WikiText corpus.1
1 INTRODUCTION
A major difficulty in language modeling is learning when to predict specific words from the immediate context. For instance, imagine a new person is introduced and two paragraphs later the context would allow one to very accurately predict this person’s name as the next word. For standard neural sequence models to predict this name, they would have to encode the name, store it for many time steps in their hidden state, and then decode it when appropriate. As the hidden state is limited in capacity and the optimization of such models suffer from the vanishing gradient problem, this is a lossy operation when performed over many timesteps. This is especially true for rare words.
Models with soft attention or memory components have been proposed to help deal with this challenge, aiming to allow for the retrieval and use of relevant previous hidden states, in effect increasing hidden state capacity and providing a path for gradients not tied to timesteps. Even with attention, the standard softmax classifier that is being used in these models often struggles to correctly predict rare or previously unknown words.
Pointer networks (Vinyals et al., 2015) provide one potential solution for rare and out of vocabulary (OoV) words as a pointer network uses attention to select an element from the input as output. This allows it to produce previously unseen input tokens. While pointer networks improve performance on rare words and long-term dependencies they are unable to select words that do not exist in the input.
We introduce a mixture model, illustrated in Fig. 1, that combines the advantages of standard softmax classifiers with those of a pointer component for effective and efficient language modeling. Rather than relying on the RNN hidden state to decide when to use the pointer, as in the recent work of Gülçehre et al. (2016), we allow the pointer component itself to decide when to use the softmax vocabulary through a sentinel. The model improves the state of the art perplexity on the Penn Treebank. Since this commonly used dataset is small and no other freely available alternative exists that allows for learning long range dependencies, we also introduce a new benchmark dataset for language modeling called WikiText.
1Available for download at the WikiText dataset site
2 THE POINTER SENTINEL FOR LANGUAGE MODELING
Given a sequence of words w1, . . . , wN−1, our task is to predict the next word wN .
2.1 THE SOFTMAX-RNN COMPONENT
Recurrent neural networks (RNNs) have seen widespread use for language modeling (Mikolov et al., 2010) due to their ability to, at least in theory, retain long term dependencies. RNNs employ the chain rule to factorize the joint probabilities over a sequence of tokens: p(w_1, . . . , w_N) = ∏_{i=1}^{N} p(w_i | w_1, . . . , w_{i−1}). More precisely, at each time step i, we compute the RNN hidden state h_i according to the previous hidden state h_{i−1} and the input x_i such that h_i = RNN(x_i, h_{i−1}). When all the N − 1 words have been processed by the RNN, the final state h_{N−1} is fed into a softmax layer which computes the probability over a vocabulary of possible words: p_vocab(w) = softmax(U h_{N−1}), where p_vocab ∈ R^V, U ∈ R^{V×H}, H is the hidden size, and V the vocabulary size. RNNs can suffer from the vanishing gradient problem. The LSTM (Hochreiter & Schmidhuber, 1997) architecture has been proposed to deal with this by updating the hidden state according to a set of gates. Our work focuses on the LSTM but can be applied to any RNN architecture that ends in a vocabulary softmax.
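To make the softmax-RNN component concrete, the following is a minimal PyTorch sketch (not the authors' code; layer sizes, class names, and the toy input are illustrative assumptions) of an LSTM whose final hidden state is projected by U and normalized into p_vocab.

```python
# Minimal sketch of the softmax-RNN component: an LSTM encodes the context and a
# linear layer followed by a softmax produces p_vocab over the V-word vocabulary.
import torch
import torch.nn as nn

class SoftmaxRNN(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.U = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):                          # tokens: (batch, N-1) word indices
        h, _ = self.rnn(self.embed(tokens))             # h: (batch, N-1, H)
        h_last = h[:, -1]                               # final hidden state h_{N-1}
        return torch.softmax(self.U(h_last), dim=-1)    # p_vocab: (batch, V)

model = SoftmaxRNN(vocab_size=10_000, hidden_size=650)
p_vocab = model(torch.randint(0, 10_000, (4, 35)))      # (4, 10000), rows sum to 1
```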
2.2 THE POINTER NETWORK COMPONENT
In this section, we propose a modification to pointer networks for language modeling. To predict the next word in the sequence, a pointer network would select the member of the input sequence p(w1, . . . , wN−1) with the maximal attention score as the output.
The simplest way to compute an attention score for a specific hidden state is an inner product with all the past hidden states h, with each hidden state hi ∈ RH . However, if we want to compute such a score for the most recent word (since this word may be repeated), we need to include the last hidden state itself in this inner product. Taking the inner product of a vector with itself results in the vector’s magnitude squared, meaning the attention scores would be strongly biased towards the most recent word. Hence we project the current hidden state to a query vector q first. To produce the query q we compute q = tanh(WhN−1 + b), where W ∈ RH×H , b ∈ RH , and q ∈ RH . To generate the pointer attention scores, we compute the match between the previous RNN output states hi and the query q by taking the inner product, followed by a softmax activation function to obtain a probability distribution:
z_i = q^T h_i,   (1)
a = softmax(z),   (2)
where z ∈ R^L, a ∈ R^L, and L is the total number of hidden states. The probability mass assigned to a given word is the sum of the probability mass given to all token positions where the given word appears:
p_ptr(w) = Σ_{i∈I(w,x)} a_i,   (3)
where I(w, x) results in all positions of the word w in the input x and pptr ∈ RV . This technique, referred to as pointer sum attention, has been used for question answering (Kadlec et al., 2016).
Given the length of the documents used in language modeling, it may not be feasible for the pointer network to evaluate an attention score for all the words back to the beginning of the dataset. Instead, we may elect to maintain only a window of the L most recent words for the pointer to match against. The length L of the window is a hyperparameter that can be tuned on a held out dataset or by empirically analyzing how frequently a word at position t appears within the last L words.
To illustrate the advantages of this approach, consider a long article featuring two sentences President Obama discussed the economy and President Obama then flew to Prague. If the query was Which President is the article about?, probability mass could be applied to Obama in either sentence. If the question was instead Who flew to Prague?, only the latter occurrence of Obama provides the proper context. The attention sum model ensures that, as long as the entire attention probability mass is distributed on the occurrences of Obama, the pointer network can achieve zero loss. This flexibility provides supervision without forcing the model to put mass on supervision signals that may be incorrect or lack proper context.
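A small NumPy sketch of pointer sum attention follows (shapes, variable names, and the random toy data are illustrative assumptions, not the authors' implementation): the query is projected from the last hidden state, scores are taken against the window of hidden states, and the attention mass is summed per word as in Eq. 3.

```python
# Sketch of pointer sum attention (Eqs. 1-3) over a window of L hidden states.
import numpy as np

def pointer_probs(H_window, h_last, window_words, vocab_size, W, b):
    q = np.tanh(W @ h_last + b)                  # query vector
    z = H_window @ q                             # z_i = q^T h_i, Eq. (1)
    a = np.exp(z - z.max())
    a /= a.sum()                                 # a = softmax(z), Eq. (2)
    p_ptr = np.zeros(vocab_size)
    for pos, w in enumerate(window_words):       # pointer sum attention, Eq. (3)
        p_ptr[w] += a[pos]
    return p_ptr

H, L, V = 8, 5, 20
rng = np.random.default_rng(0)
p = pointer_probs(rng.normal(size=(L, H)), rng.normal(size=H),
                  rng.integers(0, V, size=L), V, rng.normal(size=(H, H)), np.zeros(H))
assert abs(p.sum() - 1.0) < 1e-8
```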
2.3 THE POINTER SENTINEL MIXTURE MODEL
While pointer networks have proven to be effective, they cannot predict output words that are not present in the input, a common scenario in language modeling. We propose to resolve this by using a mixture model that combines a standard softmax with a pointer.
Our mixture model has two base distributions: the softmax vocabulary of the RNN output and the positional vocabulary of the pointer model. We refer to these as the RNN component and the pointer component respectively. To combine the two base distributions, we use a gating function g = p(zi = k|xi) where zi is the latent variable stating which base distribution the data point belongs to. As we only have two base distributions, g can produce a scalar in the range [0, 1]. A value of 0 implies that only the pointer is used and 1 means only the softmax-RNN is used.
p(y_i|x_i) = g · p_vocab(y_i|x_i) + (1 − g) · p_ptr(y_i|x_i).   (4)
While the models could be entirely separate, we re-use many of the parameters for the softmax-RNN and pointer components. This sharing minimizes the total number of parameters in the model and capitalizes on the pointer network's supervision for the RNN component.
2.4 DETAILS OF THE GATING FUNCTION
To compute the new pointer sentinel gate g, we modify the pointer component. In particular, we add an additional element to z, the vector of attention scores as defined in Eq. 1. This element is
computed using an inner product between the query and the sentinel2 vector s ∈ R^H. This change can be summarized by changing Eq. 2 to a = softmax([z; q^T s]). We define a ∈ R^{V+1} to be the attention distribution over both the words in the pointer window as well as the sentinel state. We interpret the last element of this vector to be the gate value: g = a[V + 1].
Any probability mass assigned to g is given to the standard softmax vocabulary of the RNN. The final updated, normalized pointer probability over the vocabulary in the window then becomes:
p_ptr(y_i|x_i) = (1 / (1 − g)) · a[1 : V],   (5)
where we denoted [1 : V ] to mean the first V elements of the vector. The final mixture model is the same as Eq. 4 but with the updated Eq. 5 for the pointer probability.
This setup encourages the model to have both components compete: use pointers whenever possible and back-off to the standard softmax otherwise. By integrating the gating function into the pointer computation, it is influenced by both the RNN hidden state and the pointer window’s hidden states.
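The following NumPy sketch (names, shapes, and the toy inputs are illustrative assumptions) shows how appending the sentinel score yields the gate g and how the mixture in Eqs. 4-5 is assembled; note that multiplying the normalized pointer distribution by (1 − g) simply restores the raw attention mass a_i.

```python
# Sketch of the sentinel gate and the pointer sentinel mixture (Eqs. 4-5).
import numpy as np

def pointer_sentinel_mixture(z, q, s, window_words, p_vocab):
    scores = np.append(z, q @ s)                 # [z; q^T s]
    a = np.exp(scores - scores.max())
    a /= a.sum()
    g = a[-1]                                    # gate: attention mass on the sentinel
    p = g * p_vocab                              # softmax-RNN term of Eq. 4
    for pos, w in enumerate(window_words):       # pointer term: (1-g) * (a[pos]/(1-g)) = a[pos]
        p[w] += a[pos]
    return p, g

rng = np.random.default_rng(1)
V, L, H = 20, 5, 8
p_vocab = np.full(V, 1.0 / V)
p, g = pointer_sentinel_mixture(rng.normal(size=L), rng.normal(size=H),
                                rng.normal(size=H), rng.integers(0, V, size=L), p_vocab)
assert abs(p.sum() - 1.0) < 1e-8
```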
2.5 MOTIVATION FOR THE SENTINEL AS GATING FUNCTION
To make the best decision possible regarding which component to use the gating function must have as much context as possible. As we increase the window of words for the pointer component to consider, the RNN hidden state by itself isn’t guaranteed to accurately recall the identity or order of words it has recently seen (Adi et al., 2016). This is an obvious limitation of encoding a variable length sequence into a fixed dimensionality vector.
If we want a pointer window where the length L is in the hundreds, accurately modeling all of this information within the RNN hidden state is impractical. The position of specific words is also a vital feature as relevant words eventually fall out of the pointer component’s window. To correctly model this would require the RNN hidden state to store both the identity and position of each word in the pointer window. This is far beyond the capability of the fixed dimensionality RNN hidden state.
For this reason, we integrate the gating function directly into the pointer network by use of the sentinel. The decision to back-off to the softmax vocabulary is then informed by both the query q, generated using the RNN hidden state hN−1, and from the contents of the hidden states in the pointer window itself. This allows the model to accurately query what hidden states are contained in the pointer window and avoid maintaining state for words that may have fallen out of the window.
2.6 POINTER SENTINEL LOSS FUNCTION
We minimize the cross-entropy loss −Σ_j ŷ_{ij} log p(y_{ij}|x_i), where ŷ_i is a one hot encoding of the correct output. During training, as ŷ_i is one hot, only a single mixed probability p(y_{ij}) must be computed for calculating the loss. This can result in a far more efficient GPU implementation. At prediction time, when we want all values for p(y_i|x_i), a maximum of L word probabilities must be mixed, as there is a maximum of L unique words in the pointer window of length L. This mixing can occur on the CPU where random access indexing is more efficient than the GPU.
Following the pointer sum attention network, the aim is to place probability mass from the attention mechanism on the correct output ŷ_i if it exists in the input. In the case of our mixture model the pointer loss instead becomes −log(g + Σ_{i∈I(y,x)} a_i), where I(y, x) results in all positions of the correct output y in the input x. The gate g may be assigned all probability mass if, for instance, the correct output ŷ_i exists only in the softmax-RNN vocabulary. There is no penalty if the model places the entire probability mass on any of the instances of the correct word in the input window. If the pointer component places the entirety of the probability mass on the gate g, the pointer network incurs no penalty and the loss is entirely determined by the loss of the softmax-RNN component.
2A sentinel value is inserted at the end of a search space in order to ensure a search algorithm terminates if no matching item is found. Our sentinel value terminates the pointer search space and distributes the rest of the probability mass to the RNN vocabulary.
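As a concrete illustration (a sketch with assumed variable names and toy numbers, not the paper's code), the mixture cross-entropy for a correct word y reduces to the negative log of g · p_vocab(y) plus the raw attention mass on all window positions containing y:

```python
# Sketch of the mixture loss for a single prediction.
import numpy as np

def pointer_sentinel_loss(a, g, p_vocab, window_words, target):
    ptr_mass = sum(a[i] for i, w in enumerate(window_words) if w == target)
    return -np.log(g * p_vocab[target] + ptr_mass)

a = np.array([0.1, 0.2, 0.1, 0.1])     # attention over a 4-word window (sums with g to 1)
g = 0.5                                # gate: mass assigned to the sentinel
p_vocab = np.full(20, 0.05)            # softmax-RNN distribution over a 20-word vocabulary
print(pointer_sentinel_loss(a, g, p_vocab, window_words=[3, 7, 3, 9], target=3))
```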
2.7 PARAMETERS AND COMPUTATION TIME
The pointer sentinel-LSTM mixture model results in a relatively minor increase in parameters and computation time, especially when compared to the model size required to achieve similar performance using a standard LSTM. The only two additional parameters required by the model are those required for computing q, specifically W ∈ R^{H×H} and b ∈ R^H, and the sentinel vector embedding, s ∈ R^H. This is independent of the depth of the RNN as the pointer component only interacts with the output of the final RNN layer. The additional H^2 + 2H parameters are minor compared to a single LSTM layer's 8H^2 + 4H parameters. Most models also use multiple LSTM layers.
In terms of additional computation, a pointer sentinel-LSTM of window size L only requires computing the query q (a linear layer with tanh activation), a total of L parallelizable inner product calculations, and the attention scores for the L resulting scalars via the softmax function.
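A quick back-of-the-envelope check of this overhead (a sketch; H = 650 matches the medium configuration evaluated later):

```python
# Parameter overhead of the pointer component relative to a single LSTM layer.
H = 650
pointer_extra = H * H + 2 * H          # W, b, and the sentinel s
lstm_layer = 8 * H * H + 4 * H         # one LSTM layer
print(pointer_extra, lstm_layer, pointer_extra / lstm_layer)   # roughly 12.5% of one layer
```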
3 RELATED WORK
Considerable research has been dedicated to the task of language modeling, from traditional machine learning techniques such as n-grams to deep neural sequence models.
Mixture models composed of various knowledge sources have been proposed in the past for language modeling. Rosenfeld (1996) uses a maximum entropy model to combine a variety of information sources to improve language modeling on news text and speech. These information sources include complex overlapping n-gram distributions and n-gram caches that aim to capture rare words.
Beyond n-grams, neural sequence models such as recurrent neural networks have been shown to achieve state of the art results (Mikolov et al., 2010). A variety of RNN regularization methods have been explored, including a number of dropout variations (Zaremba et al., 2014; Gal, 2015) which prevent overfitting of complex LSTM language models. Other work has modified the RNN architecture to better handle increased recurrence depth (Zilly et al., 2016).
In order to increase capacity and minimize the impact of vanishing gradients, some language and translation models have also added a soft attention or memory component (Bahdanau et al., 2015; Sukhbaatar et al., 2015; Cheng et al., 2016; Kumar et al., 2016; Xiong et al., 2016; Ahn et al., 2016). These mechanisms allow for the retrieval and use of relevant previous hidden states. Soft attention mechanisms need to first encode the relevant word into a state vector and then decode it again, even if the output word is identical to the input word used to compute that hidden state or memory. A drawback to soft attention is that if, for instance, January and March are both equally attended candidates, the attention mechanism may blend the two vectors, resulting in a context vector closest to February (Kadlec et al., 2016). Even with attention, the standard softmax classifier being used in these models often struggles to correctly predict rare or previously unknown words.
Attention-based pointer mechanisms were introduced in Vinyals et al. (2015) where the pointer network is able to select elements from the input as output. In the above example, only January or March would be available as options, as February does not appear in the input. The use of pointer networks has been shown to help with geometric problems (Vinyals et al., 2015), code generation (Ling et al., 2016), summarization (Gu et al., 2016; Gülçehre et al., 2016), and question answering (Kadlec et al., 2016). While pointer networks improve performance on rare words and long-term dependencies, they are unable to select words that do not exist in the input.
Gülçehre et al. (2016) introduce a pointer softmax model that can generate output from either the vocabulary softmax of an RNN or the location softmax of the pointer network. Not only does this allow for producing OoV words which are not in the input, the pointer softmax model is able to better deal with rare and unknown words than a model only featuring an RNN softmax. Rather than constructing a mixture model as in our work, they use a switching network to decide which component to use. For neural machine translation, the switching network is conditioned on the representation of the context of the source text and the hidden state of the decoder. The pointer network is not used as a source of information for the switching network as in our model. The pointer and RNN softmax are scaled according to the switching network and the word or location with the highest final attention score is selected for output. Although this approach uses both a pointer and RNN component, it is not a mixture model and does not combine the probabilities for a word if it occurs in both the pointer location softmax and the RNN vocabulary softmax. In our model the word probability is a mix of both the RNN and pointer components, allowing for better predictions when the context may be ambiguous.
Extending this concept further, the latent predictor network (Ling et al., 2016) generates an output sequence conditioned on an arbitrary number of base models where each base model may have differing granularity. In their task of code generation, the output could be produced one character at a time using a standard softmax or instead copy entire words from referenced text fields using a pointer network. As opposed to Gülçehre et al. (2016), all states which produce the same output are merged by summing their probabilities. The model requires a complex training process involving the forward-backward algorithm for Semi-Markov models to prevent an exponential path explosion.
4 WIKITEXT - A BENCHMARK FOR LANGUAGE MODELING
We first describe the most commonly used language modeling dataset and its pre-processing in order to then motivate the need for a new benchmark dataset.
4.1 PENN TREEBANK
In order to compare our model to the many recent neural language models, we conduct word-level prediction experiments on the Penn Treebank (PTB) dataset (Marcus et al., 1993), pre-processed by Mikolov et al. (2010). The dataset consists of 929k training, 73k validation, and 82k test words. As part of the pre-processing performed by Mikolov et al. (2010), words were lower-cased, numbers were replaced with N, newlines were replaced with 〈eos〉, and all other punctuation was removed. The vocabulary is the most frequent 10k words with OoV tokens replaced by an 〈unk〉 token. For full statistics, refer to Table 1.
4.2 REASONS FOR A NEW DATASET
While the processed version of the PTB above has been frequently used for language modeling, it has many limitations. The tokens in PTB are all lower case, stripped of any punctuation, and limited to a vocabulary of only 10k words. These limitations mean that the PTB is unrealistic for real language use, especially when far larger vocabularies with many rare words are involved. The appendix contains a graph illustrating this using a Zipfian plot over the training partition of the PTB, with the curve stopping abruptly at the 10k limit. Given that accurately predicting rare words, such as named entities, is an important task for many applications, the lack of a long tail is problematic.
Other larger scale language modeling datasets exist. Unfortunately, they either have restrictive licensing which prevents widespread use or have randomized sentence ordering (Chelba et al., 2013) which is unrealistic for most language use and prevents the effective learning and evaluation of longer term dependencies. Hence, we constructed a language modeling dataset using text extracted from Wikipedia and have made this available to the community.
4.3 CONSTRUCTION AND PRE-PROCESSING
We selected articles only fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good
articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting text from Wikipedia mark-up is nontrivial due to the large number of macros in use, used for metric conversions, abbreviations, language notation, and date handling.
Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with 〈formula〉 tokens. Normalization and tokenization were performed using the Moses tokenizer (Koehn et al., 2007), slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. Following Chelba et al. (2013), a vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the 〈unk〉 token, also a part of the vocabulary.
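A sketch of two of these steps follows (illustrative only; the regex and the helper names are assumptions that mimic the described behaviour, not the exact pipeline):

```python
# Split numbers as the augmented tokenizer does (8,600 -> 8 @,@ 600) and map rare
# words (count below 3) to the <unk> token.
import re
from collections import Counter

def split_numbers(text: str) -> str:
    # insert " @,@ " only between digits
    return re.sub(r"(?<=\d),(?=\d)", " @,@ ", text)

def map_rare_to_unk(tokens, min_count=3):
    counts = Counter(tokens)
    vocab = {w for w, c in counts.items() if c >= min_count}
    return [t if t in vocab else "<unk>" for t in tokens], vocab

print(split_numbers("the budget was 8,600 dollars"))   # -> the budget was 8 @,@ 600 dollars
```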
4.4 STATISTICS
The full WikiText dataset is over 103 million words in size, a hundred times larger than the PTB. It is also a tenth the size of the One Billion Word Benchmark (Chelba et al., 2013), one of the largest publicly available language modeling benchmarks, whilst consisting of articles that allow for the capture and usage of longer term dependencies as might be found in many real world tasks.
The dataset is available in two different sizes: WikiText-2 and WikiText-103. Both feature punctuation, original casing, a larger vocabulary, and numbers. WikiText-2 is two times the size of the Penn Treebank dataset. WikiText-103 features all extracted articles. Both datasets use the same articles for validation and testing, only differing in the vocabularies. For full statistics, refer to Table 1.
5 EXPERIMENTS
5.1 TRAINING DETAILS
As the pointer sentinel mixture model uses the outputs of the RNN from up to L timesteps back, this presents a challenge for training. If we do not regenerate the stale historical outputs of the RNN when we update the gradients, backpropagation through these stale outputs may result in incorrect gradient updates. If we do regenerate all stale outputs of the RNN, the training process is far slower. As we can make no theoretical guarantees on the impact of stale outputs on gradient updates, we opt to regenerate the window of RNN outputs used by the pointer component after each gradient update.
We also use truncated backpropagation through time (BPTT) in a different manner to many other RNN language models. Truncated BPTT allows for practical time-efficient training of RNN models but has fundamental trade-offs that are rarely discussed. For running truncated BPTT, BPTT is run for k2 timesteps once every k1 timesteps. For many RNN language modeling training schemes, k1 = k2, meaning that every k timesteps truncated BPTT is performed for the k previous timesteps. This results in only a single RNN output receiving backpropagation for k timesteps, with the other extreme being that the first token receives backpropagation for 0 timesteps. As such, most words in the training data will never experience a full backpropagation for k timesteps.
In our task, the pointer component always looks L timesteps into the past if L past timesteps are available. We select k1 = 1 and k2 = L such that for each timestep we perform backpropagation for L timesteps and advance one timestep at a time. Only the loss for the final predicted word is used for backpropagation through the window.
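A PyTorch-style sketch of this k1 = 1, k2 = L scheme is given below (illustrative; `model` is assumed to be any module mapping a (1, L) window of token ids to (1, V) logits, and the optimizer and learning-rate choices are placeholders, not the paper's exact settings):

```python
# For every position: rebuild the RNN outputs over the previous L tokens (so the
# pointer never sees stale states) and backpropagate only the final prediction's loss.
import torch
import torch.nn as nn

def train_epoch(model: nn.Module, tokens: torch.Tensor, L: int = 100, lr: float = 1.0):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for t in range(L, tokens.numel()):
        window = tokens[t - L:t].unsqueeze(0)     # (1, L): the L most recent words
        target = tokens[t].unsqueeze(0)           # the word to predict
        logits = model(window)                    # regenerated (non-stale) RNN outputs
        loss = nn.functional.cross_entropy(logits, target)   # loss of final prediction only
        opt.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # rescale if global norm > 1
        opt.step()
```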
5.2 MODEL DETAILS
Our experimental setup reflects that of Zaremba et al. (2014) and Gal (2015). We increased the number of timesteps used during training from 35 to 100, matching the length of the window L. Batch size was increased to 32 from 20. We also halve the learning rate when validation perplexity is worse than the previous iteration, stopping training when validation perplexity fails to improve for three epochs or when 64 epochs are reached. The gradients are rescaled if their global norm exceeds 1 (Pascanu et al., 2013b).3 We evaluate the medium model configuration which features a two layer
LSTM of hidden size 650. We compare against the large model configuration which features a two layer LSTM of hidden size 1500.
3The highly aggressive clipping is likely due to the increased BPTT length. Even with such clipping early batches may experience excessively high perplexity, though this settles rapidly.
We produce results for two model types, an LSTM model that uses dropout regularization and the pointer sentinel-LSTM model. The variants of dropout used were zoneout (Krueger et al., 2016) and variational inference based dropout (Gal, 2015). Zoneout, which stochastically forces some recurrent units to maintain their previous values, was used for the recurrent connections within the LSTM. Variational inference based dropout, where the dropout mask for a layer is locked across timesteps, was used on the input to each RNN layer and also on the output of the final RNN layer. We used a value of 0.5 for both dropout connections.
5.3 COMPARISON OVER PENN TREEBANK
Table 2 compares the pointer sentinel-LSTM to a variety of other models on the Penn Treebank dataset. The pointer sentinel-LSTM achieves the lowest perplexity, followed by the recent Recurrent Highway Networks (Zilly et al., 2016). The medium pointer sentinel-LSTM model also achieves lower perplexity than the large LSTM models. Note that the best performing large variational LSTM model uses computationally intensive Monte Carlo (MC) dropout averaging. Monte Carlo dropout averaging is a general improvement for any sequence model that uses dropout but comes at a greatly increased test time cost. In Gal (2015) it requires rerunning the test model with 1000 different dropout masks. The pointer sentinel-LSTM is able to achieve these results with far fewer parameters than other models with comparable performance, specifically with less than a third the parameters used in the large variational LSTM models.
We also test a variational LSTM that uses zoneout, which serves as the RNN component of our pointer sentinel-LSTM mixture. This variational LSTM model performs BPTT for the same length L as the pointer sentinel-LSTM, where L = 100 timesteps. The results for this model ablation are worse than that of Gal (2015)’s variational LSTM without Monte Carlo dropout averaging.
5.4 COMPARISON OVER WIKITEXT-2
As WikiText-2 is being introduced in this paper, there are no existing baselines. We provide two baselines to compare the pointer sentinel-LSTM against: our variational LSTM using zoneout and
the medium variational LSTM used in Gal (2015).4 Attempts to run the Gal (2015) large model variant, a two layer LSTM with hidden size 1500, resulted in out of memory errors on a 12GB K80 GPU, likely due to the increased vocabulary size. We chose the best hyperparameters from PTB experiments for all models. Table 3 shows a similar gain made by the pointer sentinel-LSTM over the variational LSTM models. The variational LSTM from Gal (2015) again beats out the variational LSTM used as a base for our experiments.
6 ANALYSIS
6.1 IMPACT ON RARE WORDS
A hypothesis as to why the pointer sentinel-LSTM can outperform an LSTM is that the pointer component allows the model to effectively reproduce rare words. The RNN may better use hidden state capacity by relying on the pointer component. The pointer component may also allow for a sharper selection of a single word than may be possible using only the softmax.
The appendix contains a graph which shows the improvement of perplexity when comparing the LSTM to the pointer sentinel-LSTM. Words are split across buckets according to frequency. As the words become rarer, the pointer sentinel-LSTM has stronger improvements in perplexity. Even on the Penn Treebank, where there is a relative absence of rare words due to only selecting the most frequent 10k words, we can see the pointer sentinel-LSTM mixture model provides a direct benefit.
While the improvements are largest on rare words, we can see the pointer sentinel-LSTM is still helpful on relatively frequent words. This may be the pointer component directly selecting the word or through the pointer supervision signal improving the RNN by allowing gradients to flow directly to other occurrences of the word in that window.
6.2 QUALITATIVE ANALYSIS OF POINTER USAGE
In a qualitative analysis, we visualized the gate use and pointer attention for a variety of examples in the validation set, focusing on predictions where the gate primarily used the pointer component. These visualizations are available in the appendix.
As expected, the pointer component is heavily used for rare names such as Seidman (23 times in training), Iverson (7 times in training), and Rosenthal (3 times in training). The pointer component was also heavily used when it came to other named entity names such as companies like Honeywell (8 times in training) and Integrated (41 times in training, though due to lowercasing of words this includes integrated circuits, fully integrated, and other generic usage). Surprisingly, the pointer component was also used for many frequent tokens. For selecting units of measurement (tons, kilograms, . . . ) or the short scale of numbers (thousands, millions, billions, . . . ), the pointer would refer to recent usage. This is to be expected, especially when phrases are of the form increased from N tons to N tons. The model can even be found relying on a mixture of the softmax and the pointer for predicting frequent verbs such as said.
Finally, the pointer component can be seen pointing to words at the very end of the 100 word window (position 97), a far longer horizon than the 35 steps that most language models truncate their backpropagation training to. This illustrates why the gating function must be integrated into the pointer component. If the gating function could only use the RNN hidden state, it would need to be wary of words that were near the tail of the pointer, especially if it was not able to accurately
track exactly how long it was since seeing a word. By integrating the gating function into the pointer component, we avoid the RNN hidden state having to maintain this intensive bookkeeping.
4https://github.com/yaringal/BayesianRNN
7 CONCLUSION
We introduced the pointer sentinel mixture model and the WikiText language modeling dataset. The pointer sentinel mixture model can be applied to any classifier that ends in a softmax, including various recurrent neural network building blocks. When applied to a standard LSTM, the pointer sentinel-LSTM achieves state of the art results in language modeling over the Penn Treebank while using few additional parameters and little additional computational complexity at prediction time.
We have also motivated the need to move from Penn Treebank to a new language modeling dataset for long range dependencies, providing WikiText-2 and WikiText-103 as potential options. We hope these new datasets can serve as a platform to improve handling of rare words and the usage of long term dependencies in language modeling. | 1. What is the focus of the paper, and how does it extend previous works on pointer models?
2. What is the novelty of the approach proposed in the paper, and how does it differ from previous attempts to combine pointer-based and standard models?
3. What are the strengths of the paper, particularly in terms of clarity and results?
4. What is the issue with the notation in Equations 3 and 5, and how could it be improved? | Review | Review
This work is an extension of previous work on pointer models that mixes the pointer outputs with standard softmax outputs.
The idea is appealing in general for context biasing and the specific approach appears quite simple.
The idea is novel to some extent, as previous papers had already tried to combine pointer-based and standard models,
but not as a mixture model, as in this paper.
The paper is clearly written and the results seem promising.
The new dataset the authors created (WikiText) also seems of high interest.
A comment regarding notation:
The symbol p_ptr is used in two different ways in eq. 3 and eq. 5: p_ptr(w) vs. p_ptr(y_i|x_i).
This is confusing as these are two different domains: for eq. 3 the domain is a *set* of words and for eq. 5 the domain is a *list* of context words.
It would be helpful to use different symbol for the two objects. |
ICLR | Title
LAVA: Data Valuation without Pre-Specified Learning Algorithms
Abstract
Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis and the choice of the learning algorithm is still undetermined then. Another side-effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computation burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. (1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. (2) We develop a novel method to value individual data based on the sensitivity analysis of the class-wise Wasserstein distance. Importantly, these values can be directly obtained for free from the output of off-the-shelf optimization solvers when computing the distance. (3) We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a significant improvement over the state-of-the-art performance while being orders of magnitude faster.
1 INTRODUCTION
Advances in machine learning (ML) crucially rely on the availability of large, relevant, and highquality datasets. However, real-world data sources often come in different sizes, relevance levels, and qualities, differing in their value for an ML task. Hence, a fundamental question is how to quantify the value of individual data sources. Data valuation has a wide range of use cases both within the domain of ML and beyond. It can help practitioners enhance the model performance through prioritizing high-value data sources (Ghorbani & Zou, 2019), and it allows one to make strategic and economic decisions in data exchange (Scelta et al., 2019).
In the past literature (Ghorbani & Zou, 2019; Jia et al., 2019b; Kwon & Zou, 2021), data valuation is posed as a problem of equitably splitting the validation performance of a given learning algorithm among the training data. Formally, given a training dataset Dt = {zi}Ni=1, a validation dataset Dv , a learning algorithm A, and a model performance metric PERF (e.g., classification accuracy), a utility function is first defined over all subsets S ⊆ Dt of the training data: U(S) := PERF(A(S)). Then, the objective of data valuation is to find a score vector s ∈ RN that represents the allocation to each datapoint. For instance, one simple way to value a point zi is through leave-one-out (LOO) error U(Dt)− U(Dt \ {zi}), i.e., the change of model performance when the point is excluded from training. Most of the recent works have leveraged concepts originating from cooperative game theory (CGT), such as the Shapley value (Ghorbani & Zou, 2019; Jia et al., 2019b), Banzhaf value (Wang
& Jia, 2022), general semivalues (Kwon & Zou, 2021), and Least cores (Yan & Procaccia, 2021) to value data. Like the LOO, all of these concepts are defined based on the utility function.
∗Equal contribution. Repository publicly available on Github: https://github.com/ruoxi-jia-group/LAVA.
Since the utility function is defined w.r.t. a specific learning algorithm, the data values calculated from the utility function also depend on the learning algorithm. In practice, there are many choice points pertaining to a learning algorithm, such as the model to be trained, the type of learning algorithm, as well as the hyperparameters. The detailed settings of the learning algorithms are often derived from data analysis. However, in many critical applications of data valuation such as informing data acquisition priorities and designing data pricing mechanism, data needs to be valued before the actual analysis and the choice points of the learning algorithm are still undetermined at that time. This gap presents a main hurdle for deploying existing data valuation schemes in the real world.
The reliance on learning algorithms also makes existing data valuation schemes difficult to scale to large datasets. The exact evaluation of LOO error and CGT-based data value notions require evaluating utility functions over different subsets and each evaluation entails retraining the model on that subset: the number of retraining times is linear in the number of data points for the former, and exponential for the latter. While existing works have proposed a variety of approximation algorithms, scaling up the calculation of these notions to large datasets remains expensive. Further, learning-algorithm-dependent approaches rely on the performance scores associated with models trained on different subsets to determine the value of data; thus, they are susceptible to noise due to training stochasticity when the learning algorithm is randomized (e.g., SGD) (Wang & Jia, 2022).
This work addresses these limitations by introducing a learning-agnostic data valuation (LAVA) framework. LAVA is able to produce efficient and useful estimates of data value in a way that is oblivious to downstream learning algorithms. Our technical contributions are listed as follows.
Proxy for validation performance. We propose a proxy for the validation performance associated with a training set based on the non-conventional class-wise Wasserstein distance (Alvarez-Melis & Fusi, 2020) between the training and the validation set. The hierarchically-defined Wasserstein distance utilizes a hybrid Euclidean-Wasserstein cost function to compare the feature-label pairs across datasets. We show that this distance characterizes the upper bound of the validation performance of any given models under certain Lipschitz conditions.
Sensitivity-analysis-based data valuation. We develop a method to assess the value of an individual training point by analyzing the sensitivity of the particular Wasserstein distance to the perturbations on the corresponding probability mass. The values can be directly obtained for free from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. As the Wasserstein distance can be solved much more efficiently with entropy regularization (Cuturi, 2013), in our experiments, we utilize the duals of the entropy-regularized program to approximate the sensitivity. Remarkably, we show that the gap between two data values under the original non-regularized Wasserstein distance can be recovered exactly from the solutions to the regularized program.
State-of-the-art performance for differentiating data quality. We evaluate LAVA over a wide range of use cases, including detecting mislabeled data, backdoor attacks, poisoning attacks, noisy features, and task-irrelevant data, in which some of these are first conducted in the data valuation setting. Our results show that, surprisingly, the learning-agnostic feature of our framework enables a significant performance improvement over existing methods, while being orders of magnitude faster.
2 MEASURING DATASET UTILITY VIA OPTIMAL TRANSPORT
In this section, we consider the problem of quantifying training data utility U(Dt) without the knowledge of learning algorithms. Similar to most of the existing data valuation frameworks, we assume access to a set of validation points Dv. Our idea is inspired by recent work on using the hierarchically-defined Wasserstein distance to characterize the relatedness of two datasets (AlvarezMelis & Fusi, 2020). Our contribution here is to apply that particular Wasserstein distance to the data valuation problem and provide a theoretical result that connects the distance to validation performance of a model, which might be of independent interest.
2.1 OPTIMAL TRANSPORT-BASED DATASET DISTANCE
Background on Optimal Transport (OT). OT is a celebrated choice for measuring the discrepancy between probability distributions (Villani, 2009). Compared to other notable dissimilarity measures such as the Kullback-Leibler Divergence (Kullback & Leibler, 1951) or Maximum Mean Discrepancies (MMD) (Szekely et al., 2005), the mathematically well-defined OT distance has advantageous analytical properties. For instance, OT is a distance metric, being computationally tractable and computable from finite samples (Genevay et al., 2018; Feydy et al., 2019).
The Kantorovich formulation (Kantorovich, 1942) defines the OT problem as a Linear Program (LP). Given probability measures µt, µv over the space Z, the OT problem is defined as OT(µt, µv) := min_{π∈Π(µt,µv)} ∫_{Z²} C(z, z′) dπ(z, z′), where Π(µt, µv) := {π ∈ P(Z × Z) | ∫_Z π(z, z′) dz = µt, ∫_Z π(z, z′) dz′ = µv} denotes the collection of couplings between the two distributions µt and µv, and C : Z × Z → R+ is some symmetric positive cost function (with C(z, z) = 0). If C(z, z′) is the Euclidean distance between z and z′ according to the distance metric d, then OT(µt, µv) is the 2-Wasserstein distance, which we denote as WC(µt, µv) = Wd(µt, µv) := OT(µt, µv). In this work, the notations OT and W are used interchangeably, with the slight difference that we use OT to emphasize its various formulations while W specifies on which distance metric it is computed.
Measuring Dataset Distance. We consider a multi-label setting where we denote ft : X → {0, 1}^V, fv : X → {0, 1}^V as the labeling functions for training and validation data, respectively, where V is the number of different labels. Given the training set Dt = {(xi, ft(xi))}_{i=1}^N of size N, and the validation set Dv = {(x′i, fv(x′i))}_{i=1}^M of size M, one can construct discrete measures µt(x, y) := (1/N) Σ_{i=1}^N δ_{(xi,yi)} and µv(x, y) := (1/M) Σ_{i=1}^M δ_{(x′i,y′i)}, where δ is the Dirac function. Consider that each datapoint consists of a feature-label pair (xi, yi) ∈ X × Y. While the Euclidean distance naturally provides the metric to measure distance between features, the distance between labels generally lacks a definition. Consequently, we define conditional distributions µt(x|y) := µt(x) I[ft(x) = y] / ∫ µt(x) I[ft(x) = y] dx and µv(x|y) := µv(x) I[fv(x) = y] / ∫ µv(x) I[fv(x) = y] dx. Inspired by Alvarez-Melis & Fusi (2020), we measure the distance between two labels in terms of the OT distance between the conditional distributions of the features given each label. Formally, we adopt the following cost function between feature-label pairs: C((xt, yt), (xv, yv)) := d(xt, xv) + c Wd(µt(·|yt), µv(·|yv)), where c ≥ 0 is a weight coefficient. We note that C is a distance metric since Wd is a valid distance metric. With the definition of C, we propose to measure the distance between the training and validation sets using the non-conventional, hierarchically-defined Wasserstein distance between the corresponding discrete measures: WC(µt, µv) = min_{π∈Π(µt,µv)} ∫_{Z²} C(z, z′) dπ(z, z′).
Despite its usefulness and potentially broad applications, existing research has neither explored the theoretical properties of this notion nor built applications upon it. This work aims to fill this gap in both directions: novel analytical results are presented to provide its theoretical justification, while an original computing framework is proposed that extends its applications to a new scenario of datapoint valuation.
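To make this distance computable, the sketch below uses the POT library (imported as `ot`; its availability is an assumption, and this is not the authors' code) on pre-extracted feature vectors with synthetic toy data: the inner OT distances between per-class feature clouds give the label-to-label costs, these are added to the feature distances to form C, and the outer OT over C gives the dataset distance.

```python
# Sketch of the class-wise (label-aware) OT distance between two labeled datasets.
import numpy as np
import ot  # Python Optimal Transport (assumed installed)

def label_cost_matrix(Xt, yt, Xv, yv, classes):
    D = np.zeros((len(classes), len(classes)))
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            A, B = Xt[yt == ci], Xv[yv == cj]
            M = ot.dist(A, B, metric="euclidean")
            D[i, j] = ot.emd2(ot.unif(len(A)), ot.unif(len(B)), M)  # W_d(mu_t(.|ci), mu_v(.|cj))
    return D

def classwise_wasserstein(Xt, yt, Xv, yv, c=1.0):
    classes = list(np.unique(np.concatenate([yt, yv])))
    D = label_cost_matrix(Xt, yt, Xv, yv, classes)
    idx = {cls: k for k, cls in enumerate(classes)}
    feat_cost = ot.dist(Xt, Xv, metric="euclidean")                     # d(x_t, x_v)
    label_cost = np.array([[D[idx[a], idx[b]] for b in yv] for a in yt])
    C = feat_cost + c * label_cost                                      # C((x_t,y_t),(x_v,y_v))
    return ot.emd2(ot.unif(len(Xt)), ot.unif(len(Xv)), C)

rng = np.random.default_rng(0)
Xt, yt = rng.normal(size=(60, 16)), rng.integers(0, 3, 60)
Xv, yv = rng.normal(size=(40, 16)), rng.integers(0, 3, 40)
print(classwise_wasserstein(Xt, yt, Xv, yv))
```

For larger datasets, the exact solver `ot.emd2` could presumably be swapped for the entropy-regularized `ot.sinkhorn2` with a small ε, in line with the acceleration discussed next.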
Computational Acceleration via Entropic Regularization. Solving the problem above scales cubically with MN, which is prohibitive for large datasets. Entropy-regularized OT (entropy-OT) becomes a prevailing choice for approximating OT distances as it allows for the fastest-known algorithms. Using the iterative Sinkhorn algorithm (Cuturi, 2013) with almost linear time complexity and memory overhead, entropy-OT can be implemented on a large scale with parallel computing (Genevay et al., 2018; Feydy et al., 2019). Given a regularization parameter ε > 0, entropy-OT can be formulated as OT_ε(µt, µv) := min_{π∈Π(µt,µv)} ∫_{Z²} C(z, z′) dπ(z, z′) + ε H(π | µt ⊗ µv), where H(π | µt ⊗ µv) = ∫_{Z²} log(dπ / dµt dµv) dπ. As ε → 0, the dual solutions to the ε-entropy-OT converge to their OT counterparts as long as the latter are unique (Nutz & Wiesel, 2021).
2.2 LOWER Class-Wise Wasserstein Distance ENTAILS BETTER VALIDATION PERFORMANCE
In this paper, we propose to use WC , a non-conventional, class-wise Wasserstein distance w.r.t. the special distance function C defined in 2.1, as a learning-agnostic surrogate of validation performance to measure the utility of training data. Note that while Wasserstein distances have been frequently used to bound the learning performance change due to distribution drift (Courty et al., 2017; Damodaran et al., 2018; Shen et al., 2018; Ge et al., 2021), this paper is the first to bound the performance change by the hierarchically-defined Wasserstein distance with respect to the hybrid cost C. Figure 1 provides an empirical justification for using this novel distance metric as a proxy, and presents a relation between the class-wise Wasserstein distance and a model’s validation performance. Each curve represents a certain dataset trained on a specific model to receive its performance. Since,
each dataset is of different size and structure, their distances will be of different scale. Therefore, we normalize the distances to the same scale to present the relation between the Wasserstein distance and model performance, which shows that despite different datasets and models, with increased distance, the validation performance decreases.
The next theorem theoretically justifies using this Wasserstein distance as a proxy for validation performance of a model. With assumptions on Lipschitzness of the downstream model as well as the labeling functions associated with the training and validation sets (as explicated in Appendix A), we show that the discrepancy between the training and validation performance of a model is bounded by the hierarchically-defined Wasserstein distance between the training and the validation datasets.
Theorem 1. We denote ft : X → {0, 1}V , fv : X → {0, 1}V as the labeling functions for training and validation data, where V is the number of different labels. Let f : X → [0, 1]V be
the model trained on training data. By definitions, we have that ∥f(·)∥, ∥ft(·)∥, ∥fv(·)∥ ≤ V . Let µt, µv be the training and validation distributions, respectively, and let µt(·|y) and µv(·|y) be the corresponding conditional distributions given label y. Assume that the model f is ϵ-Lipschitz and the loss function L : {0, 1}V × [0, 1]V → R+ is k-Lipschitz in both inputs. Define cost function C between (xv, yv) and (xt, yt) as C((xt, yt), (xv, yv)) := d(xt, xv)+cWd(µt(·|yt), µv(·|yv)), where c is a constant. Under a certain cross-Lipschitzness assumption for ft and fv detailed in Appendix A, we have Ex∼µv(x) [L(fv(x), f(x))] ≤ Ex∼µt(x) [L(ft(x), f(x))] + kϵWC(µt, µv) +O(kV ).
Proofs are deferred to Appendix A. The bound is interesting to interpret. The first term on the right-hand side corresponds to the training performance. In practice, when a model with large enough capacity is used, this term is small. The second one is the exact expression of the Wasserstein distance that we propose to use as a proxy for validation performance. The last error term is due to possible violation of the cross-Lipschitzness assumption for ft and fv. This term will be small if ft and fv assign the same label to close features with high probability. If the last term is small enough, it is possible to use the proposed Wasserstein distance as proxy for validation loss provided that f , ft and fv verify the cross-Lipschitz assumptions. The bound resonates with the empirical observation in Figure 1 that with lower distance between the training and the validation data, the validation loss of the trained model decreases.
3 EFFICIENT VALUATION OF INDIVIDUAL DATAPOINTS
Note that the class-wise Wasserstein distance defined in the previous section can be used to measure the utility for subsets of Dt. Given this utility function, one can potentially use existing CGT-based notions such as the Shapley value to measure the contribution of individual points. However, even approximating these notions requires evaluating the utility function on a large number of subsets, which incurs large extra computation costs. In this section, we introduce a new approach to valuating individual points. Remarkably, our values can be directly obtained for free from the output of off-the-shelf optimization solvers once the proposed Wasserstein distance between the full training and testing datasets is computed.
3.1 DATAPOINT VALUATION VIA PARAMETER SENSITIVITY
OT distance is known to be insensitive to small differences while also being not robust to large deviations (Villani, 2021). This feature is naturally suitable for detecting abnormal datapoints— disregarding normal variations in distances between clean data while being sensitive to abnormal distances of outlying points. We propose to measure individual points’ contribution based on the gradient of the OT distance to perturbations on the probability mass associated with each point.
Gradients are local information. However, unlike widely used influence functions that only hold for infinitesimal perturbation (Koh & Liang, 2017), gradients for LP hold precisely in a local range and still encode partial information beyond that range, making it capable of reliably predicting the change to the OT distance due to adding or removing datapoints without the need of re-calculation. Also, the gradients are directed information, revealing both positive and negative contributions for each
datapoint and allowing one to perform ranking of datapoints based on the gradient values. Finally, the OT distance always considers the collective effect of all datapoints in the dataset.
Leveraging the duality theorem for LP, we rewrite the original OT problem (introduced in 2.1) in the equivalent form: OT(µt, µv) := max_{(f,g)∈C⁰(Z)²} ⟨f, µt⟩ + ⟨g, µv⟩, where C⁰(Z) is the set of all continuous functions, and f and g are the dual variables. Let π∗ and (f∗, g∗) be the corresponding optimal solutions to the primal and dual problems. The Strong Duality Theorem indicates that OT(π∗(µt, µv)) = OT(f∗, g∗), where the right-hand side is the distance parameterized by µt and µv. From the Sensitivity Theorem (Bertsekas, 1997), we have that the gradient of the distance w.r.t. the probability mass of datapoints in the two datasets can be expressed as follows: ∇_{µt} OT(f∗, g∗) = (f∗)^T, ∇_{µv} OT(f∗, g∗) = (g∗)^T. Note that the original formulation in 2.1 is always redundant as the constraint Σ_{i=1}^N µt(zi) = Σ_{i=1}^M µv(z′i) = 1 is already implied, rendering the dual solution to be non-unique. To address this issue, we first remove any one of the constraints in Π(µt, µv) and make the primal formulation non-degenerate. Then, we assign a value of zero to the dual variable corresponding to that removed primal constraint.
When measuring the gradients of the OT distance w.r.t. the probability mass of a given datapoint in each dataset, we calculate the calibrated gradient as
∂OT(µt, µv)/∂µt(zi) = f∗_i − (1/(N − 1)) Σ_{j∈{1,...,N}\{i}} f∗_j,   ∂OT(µt, µv)/∂µv(z′i) = g∗_i − (1/(M − 1)) Σ_{j∈{1,...,M}\{i}} g∗_j,   (1)
which represents the rate of change in the OT distance w.r.t the change of the probability mass of a given datapoint along the direction ensuring the probability mass for all datapoints in the dataset always sums up to one (explicitly enforcing the removed constraint). The value of calibrated gradients is independent of the choice of selection during the constraint removal.
Datapoint valuation via calibrated gradients. The calibrated gradients predict how the OT distance changes as more probability mass is shifted to a given datapoint. This can be interpreted as a measure of the contribution of the datapoint to the OT distance. The contribution can be positive or negative, suggesting shifting more probability mass to this datapoint would result in an increase or decrease of the dataset distance, respectively. If we want a training set to match the distribution of the validation dataset, then removing datapoints with large positive gradients while increasing datapoints with large negative gradients can be expected to reduce their OT distance. As we will show later, the calibrated gradients can provide a tool to detect abnormal or irrelevant data in various applications.
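A sketch of how these calibrated gradients can be read off in practice is given below. It assumes the POT library (`ot`) and that `ot.emd(..., log=True)` exposes the dual potentials as `log['u']` and `log['v']` (an assumption about the interface); feature extraction and the class-wise cost construction are taken as already done, so a generic cost matrix C over random toy features stands in here.

```python
# Sketch of Eq. (1): calibrate the dual potentials so the gradient is taken along
# directions that keep the probability masses summing to one.
import numpy as np
import ot

def calibrated_gradients(C):
    N, M = C.shape
    _, log = ot.emd(ot.unif(N), ot.unif(M), C, log=True)
    f, g = np.asarray(log["u"]), np.asarray(log["v"])      # dual potentials f*, g*
    grad_t = f - (f.sum() - f) / (N - 1)                   # f*_i minus the mean of the other f*_j
    grad_v = g - (g.sum() - g) / (M - 1)
    return grad_t, grad_v

rng = np.random.default_rng(0)
C = ot.dist(rng.normal(size=(50, 8)), rng.normal(size=(30, 8)))
grad_t, grad_v = calibrated_gradients(C)
suspects = np.argsort(-grad_t)   # largest gradient first: shifting mass there raises the distance
```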
Radius for accurate predictions. The Linear Programming theories (Bertsimas & Tsitsiklis, 1997) give that for each non-degenerate optimal solution, we are always able to perturb parameters on the right-hand side of primal constraints (Π(µt, µv) in 2.1) in a small range without affecting the optimal solution to the dual problem. When the perturbation goes beyond a certain range, the dual solution becomes primal infeasible and the optimization problem needs to be solved again. Hence, the calibrated gradients are local information and we would like to know the perturbation radius such that the optimal dual solution remains unchanged—i.e., whether this range is large enough such that the calibrated gradients can accurately predict the actual change to the OT distance. If the perturbation goes beyond this range, the prediction may become inaccurate as the dual solution only encodes partial information about the optimization.
In our evaluation, we find that this range is about 5% to 25% of the probability measure of the datapoint (µ(·)(zi)) for perturbations in both directions and the pattern seems independent of the size of the datasets. This range being less than the probability mass of a datapoint suggests that we are only able to predict the change to the OT distance for removing/adding a datapoint to the dataset approximately, though, the relative error is well acceptable (depicted in Figure 2).
3.2 PRECISE RECOVERY OF RANKING FOR DATA VALUES OBTAINED FROM ENTROPY-OT
Due to computational advantages of the entropy-OT (defined in Eq. 2.1), one needs to resort to the solutions to entropy-OT to calculate data values. We quantify the deviation in the calibrated gradients caused by the entropy regularizer. This analysis provides foundations on the potential impact of the deviation on the applications built on these gradients. Theorem 2. Let OT(µt, µv) and OTε(µt, µv) be the original formulation and entropy penalized formulation (as defined in 2.1) for the OT problem between the empirical measures µt and µv associated with the two datasets Dt and Dv, respectively, where |Dt| = N and |Dv| = M . Then,
for any i ≠ j ≠ k ∈ {1, 2, . . . , N} and o ≠ p ≠ q ∈ {1, 2, . . . ,M}, the difference between the calibrated gradients for two datapoints zi and zk in dataset Dt and the difference for z′p and z′q in Dv can be calculated as
∂OT(µt, µv)/∂µt(zi) − ∂OT(µt, µv)/∂µt(zk) = ∂OTε(µt, µv)/∂µt(zi) − ∂OTε(µt, µv)/∂µt(zk) − ε · (N/(N − 1)) · (1/(π∗ε)_{kj} − 1/(π∗ε)_{ij}),   (2)
∂OT(µt, µv)/∂µv(z′p) − ∂OT(µt, µv)/∂µv(z′q) = ∂OTε(µt, µv)/∂µv(z′p) − ∂OTε(µt, µv)/∂µv(z′q) − ε · (M/(M − 1)) · (1/(π∗ε)_{qo} − 1/(π∗ε)_{po}),   (3)
where π∗ε is the optimal primal solution to the entropy penalized OT problem defined in 2.1, zj is any datapoint in Dt other than zi or zk, and z′o is any datapoint in Dv other than z′p or z′q .
The gradient difference on the left-hand side of (2) represents the groundtruth value difference between two training points zi and zk as the values are calculated based on the original OT formulation. In practice, for the sake of efficiency, one only solves the regularized formulation instead and, therefore, this groundtruth difference cannot be obtained directly. Theorem 2 nevertheless indicates a very interesting fact that one can calculate the groundtruth difference based on the solutions to the regularized problem, because every term in the right-hand side only depends on the solutions to the regularized problem. Particularly, the groundtruth value difference is equal to the value difference produced by the regularized solutions plus some calibration terms that scale with ε (Nutz & Wiesel, 2021). This result indicates that while it is not possible to obtain individual groundtruth value by solving the regularized problem, one can actually exactly recover the groundtruth value difference based on the regularized solutions. In many applications of data valuation such as data selection, it is the order of data values that matters (Kwon & Zou, 2021). For instance, to filter out low-quality data, one would first rank the datapoints based on their values and then throw the points with lowest values. In these applications, solving the entropy-regularized program is an ideal choice—which is both efficient and recovers the exact ranking of datapoint values. Finally, note that Eq. 3 presents a symmetric result for the calibrated gradients for validation data. In our experiments, we set ϵ = 0.1, rendering the corresponding calibration terms to be negligible. As a result, we can directly use the calibrated gradients solved by the regularized program to rank datapoint values.
4 EXPERIMENTS
In this section, we demonstrate the practical efficacy and efficiency of LAVA on various classification datasets. We compare with nine baselines: (1) Influence functions (INF) (Koh & Liang, 2017), which approximates the LOO error with first-order extrapolation; (2) TracIn-Clean (Pruthi et al., 2020), which accumulates the loss change on validation data during training whenever the training point of interest is sampled; (3) TracIn-Self (Pruthi et al., 2020), which is similar to TracIn-Clean but accumulates the training loss changes; (4) KNN-Shapley (KNN-SV) (Jia et al., 2019a), which
approximates the Shapley value using K-Nearest-Neighbor as a proxy model; and (5) Random, a setting where we select a random subset from the target dataset. We also consider the popular data valuation approaches: (6) Permutation Sampling-based Shapley value (Perm-SV) (Jia et al., 2019b), (7) Least Cores (LC) (Yan & Procaccia, 2021), (8) TMC-Shapley (TMC-SV) and (9) G-Shapley (G-SV) (Ghorbani & Zou, 2019). Baselines (6)-(9) are, however, computationally infeasible for the scale of data that we study here, so we exclude them from the evaluation of efficacy in different use cases. We also provide a detailed runtime comparison of all baselines. For all methods to be compared, a validation set of 10,000 samples is assumed. For our method, we first use the validation data to train a deep neural network model, PreActResNet18 (He et al., 2016), from scratch for feature extraction. Then, from its output, we compute the class-wise Wasserstein distance and the calibrated gradients for data valuation. Details about datasets, models, hyperparameter settings, and ablation studies of the hyperparameters and validation sizes are provided in Appendix B.
We evaluate on five different use cases of data valuation: detecting backdoor attack, poisoning attack, noisy features, mislabeled data, and irrelevant data. The first four are conventional tasks in the literature and the last one is a new case. All of them have a common goal of identifying “low-quality” training points. To achieve this goal, we rank datapoints in ascending order of their values and remove some number of points with lowest data values. For each removal budget, we calculate the detection rate, i.e., the percentage of the points that are truly bad within the removed points.
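As a concrete illustration of this evaluation protocol, the sketch below computes a detection-rate curve from any vector of per-point data values; `values` and the boolean mask `is_bad` are hypothetical inputs standing in for the output of a valuation method and the known corruption indices.

```python
import numpy as np

def detection_rates(values, is_bad, budgets):
    """Detection rate at each removal budget: the fraction of removed points
    (the `budget` lowest-valued ones) that are truly corrupted."""
    order = np.argsort(values)  # ascending: lowest-valued points are removed first
    return np.array([is_bad[order[:b]].mean() for b in budgets])
```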
Backdoor Attack Detection. A popular technique for introducing backdoors into models is to inject maliciously constructed data into a training set (Zeng et al., 2021). At test time, any trained model will misclassify inputs patched with the backdoor trigger as the adversarially-desired target class. In the main text, we consider the Trojan Square attack, a popular attack algorithm (Liu et al., 2017), which injects training points that contain a backdoor trigger and are relabeled as a target class. The evaluation of other types of backdoor attacks can be found in Appendix B. To simulate this attack, we select the target attack class Airplane and poison 2,500 (5%) samples of the total CIFAR-10 training set (50k) with a square trigger. In Figure 3 I.(a), we compare the detection rates of different data valuation methods. LAVA and TracIn-Clean outperform the others by a large margin. In particular, for LAVA, the first 20% of the points that it removes contain at least 80% of the poisoned data. We also evaluate whether the model trained after the removal still suffers from the backdoor vulnerability. To perform this evaluation, we calculate the attack accuracy, i.e., the accuracy of the model trained on the remaining points in predicting backdoored examples as the target label. A successful data removal would yield a lower attack accuracy. Figure 3 I.(b) shows that our method already takes effect in the early stages, whereas the other baselines start defending against the attack only after removing over 13,000 samples. The efficacy of LAVA is partly attributable to its inspection of distances in both feature and label space. The backdoored training samples that are poisoned to the target class will be “unnatural” in that class, i.e., they have a large feature distance from the original samples in the target class. While the poisoned examples contain only a small feature perturbation compared to the natural examples from some other classes, their label distance to them is large because their labels are altered.
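For reproducibility of the setup described above, a minimal simulation of a square-trigger backdoor might look like the following; the trigger size, location, and target class index are illustrative assumptions rather than the exact parameters of the Trojan Square attack.

```python
import numpy as np

def poison_with_square_trigger(images, labels, target_class=0, rate=0.05,
                               trigger_size=5, seed=0):
    """Simulate a square-trigger backdoor: patch a bright square onto a random
    subset of images and relabel them as the target class.
    images: uint8 array (n, H, W, C); labels: int array (n,)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i, -trigger_size:, -trigger_size:, :] = 255  # trigger in the bottom-right corner
        labels[i] = target_class                            # relabel to the attack target
    return images, labels, idx
```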
Poisoning Attack Detection. Poisoning attacks are similar to backdoor attacks in that both inject adversarial points into the training set to manipulate the prediction on certain test examples. Unlike backdoor attacks, however, poisoning attacks cannot modify the test examples themselves. We consider a popular attack termed the “feature-collision” attack (Shafahi et al., 2018), where we select a target sample from the Cat class test set and blend the selected image into training samples of the chosen target class, Frog in our case. In this attack, we do not modify labels and blend the Cat image into only 50 (0.1%) samples of Frog, which makes this attack especially hard to detect. At inference time, we expect the attacked model to consistently classify the chosen Cat as a Frog. In Figure 3 II.(a), we observe that LAVA outperforms all baselines and achieves an 80% detection rate by removing only 11k samples, which is around 60% fewer samples than the best-performing baseline requires. Figure 3 II.(b) shows that by removing data according to the LAVA ranking, the target model's confidence in predicting the target Cat sample as a Frog drops below 40%. Our technique leverages the fact that features from a different class are mixed into the poisoned class, which increases the feature distance between the poisoned and non-poisoned Frog examples.
Noisy Feature Detection. While adding small Gaussian noise to training samples may benefit model robustness (Rusak et al., 2020), strong noise, such as that caused by sensor failure, can significantly degrade model performance. We add strong white noise to 25% of the CIFAR-10 dataset without changing any labels. Our method performs extremely well, as shown in Figure 3 III.(a), and detects all 12,500 noisy samples by inspecting less than 15,000 samples. This explains the sudden drop in the model's accuracy at the removal budget of 15,000 samples in Figure 3 III.(b): from that point on, the model starts throwing away only clean samples. LAVA performs well in this scenario since the strong noise increases the feature distance significantly.
Mislabeled Data Detection. Due to the prevalence of human labeling errors (Karimi et al., 2020), it is crucial to detect mislabeled samples. We shuffle the labels of 25% of the samples in the CIFAR-10 dataset to random classes. Unlike backdoor and poisoning attacks, this case is especially hard to detect since the wrong samples are spread out across classes instead of being concentrated in a single target class. However, as shown in Figure 3 IV.(a), LAVA's detection rate outperforms the other baselines, and the model performance is maintained even after removing 20k datapoints (Figure 3 IV.(b)).
Irrelevant Data Detection. Datasets collected through web scraping often contain irrelevant samples in given classes (Northcutt et al., 2021; Tsipras et al., 2020); e.g., in a class of Glasses, we might have both water glasses and eyeglasses due to a lack of proper inspection or class-meaning specification. This case is different from the mislabeled data scenario, in which the training features are all relevant to the task. Since the irrelevant examples are highly likely to have completely different features than the desired class representation, LAVA is expected to detect these examples. We design an experiment where we remove all images of one specific class from the classification output but split them equally among the other remaining classes as irrelevant images. As shown in Figure 4, the detection result for a class varies based on the distance between that class and the class from which the irrelevant images are drawn. For instance, when Deer images are placed into the Truck class, we can detect almost 94% of all Deer images within the first 500 removed images. On the other hand, when we place Cat images into the Dog class, our detection rate drops to 45% within the top 500 removed images.
Computational Efficiency. So far, we have focused on the method’s performance without considering the actual runtime. We compare the runtime-performance tradeoff on the CIFAR-10 example of 2000 samples with 10% backdoor data, a scale in which every baseline can be executed in a reasonable time. As shown in Figure 5, our method achieves a significant improvement in efficiency while being able to detect bad data more effectively.
Dependence on Validation Data Size. For the current experiments, we have assumed a validation set of size 10K. Such a scale of data is not hard to acquire, as one can get high-quality data from crowdsourcing platforms, such as Amazon Mechanical Turk, for $12 per 1K samples (AWS, 2019). While our method achieves remarkable performance when using 10K validation data, we perform an ablation study on much smaller sets (Appendix B.2.1), where LAVA, notably, can still outperform other baselines. As an example, on mislabeled data detection, our method with 2K validation data achieves an 80% detection rate at a data removal budget of 25K (Fig. 9), whereas the best performing baseline achieves such performance only with a validation set five times larger, 10K (Fig. 3 IV.(a)). Furthermore, even on a tiny validation set of size 500, LAVA consistently outperforms all the baselines with the same validation size (Fig. 11). This shows that our method remains effective across various validation data sizes.
5 RELATED WORK
Existing data valuation methods include LOO and influence function (Koh & Liang, 2017), the Shapley value (Jia et al., 2019b; Ghorbani & Zou, 2019; Wang & Jia, 2023), the Banzhaf value (Wang & Jia, 2022), Least Cores (Yan & Procaccia, 2021), Beta Shapley (Kwon & Zou, 2021), and reinforcement learning-based method (Yoon et al., 2020). However, they all assume the knowledge of the underlying learning algorithms and suffer large computational complexity. The work of Jia et al. (2019a) has proposed to use K-Nearest Neighbor Classifier as a default proxy model to perform data valuation. While it can be thought of as a learning-
agnostic data valuation method, it is not as effective and efficient as our method in distinguishing data quality. Xu et al. (2021) propose to use the volume to measure the utility of a dataset. Volume is agnostic to learning algorithms and easy to calculate because it is defined simply as the square root of the trace of the feature matrix inner product. However, the sole dependence on features makes it incapable of detecting bad data caused by labeling errors. Moreover, to evaluate the contribution of individual points, the authors propose to resort to the Shapley value, which would still be expensive for large datasets.
6 DISCUSSION AND OUTLOOK
This paper describes a learning-agnostic data valuation framework. In particular, in contrast to existing methods which typically adopt model validation performance as the utility function, we approximate the utility of a dataset based on its class-wise Wasserstein distance to a given validation set and provide theoretical justification for this approximation. Furthermore, we propose to use the calibrated gradients of the OT distance to value individual datapoints, which can be obtained for free if one uses an off-the-shelf solver to calculate the Wasserstein distance. Importantly, we have tested on various datasets, and our LAVA framework can significantly improve the state-of-the-art performance of using data valuation methods to detect bad data while being substantially more efficient. Due to the stochasticity of ML and the inherent tolerance to noise, it is often challenging to identify low-quality data by inspecting their influence on model performance scores. The take-away from our empirical study is that despite being extensively adopted in the past, low-quality data detection through model performance changes is actually suboptimal; lifting the dependence of data valuation on the actual learning process provides a better pathway to distinguish data quality.
Despite the performance and efficiency improvements, our work still has some limitations. As a result, it opens up many new avenues for investigation: (1) How to further lift the dependence on validation data? While a validation set representative of the downstream learning task is a common assumption in the ML literature, it may or may not be available during data exchange. (2) Our design could be vulnerable to existing poisons that directly or indirectly minimize the similarity to clean data (Huang et al., 2021; Pan et al., 2022). Further investigation into robust data valuation would be intriguing. (3) Our current method does not have enough flexibility for tasks that aim for goals beyond accuracy, e.g., fairness. Folding other learning goals in is an exciting direction. (4) Customizing the framework to natural language data is also of practical interest.
7 ACKNOWLEDGEMENTS
RJ and the ReDS Lab gratefully acknowledge the support from the Cisco Research Award, the Virginia Tech COE Fellowship, and the NSF CAREER Award. Jiachen T. Wang is supported by Princeton’s Gordon Y. S. Wu Fellowship. YZ is supported by the Amazon Fellowship.
APPENDIX A RESTATEMENT OF THEOREMS AND FULL PROOFS
In this section, we will restate our main results and give full proofs.
A.1 SUMMARY OF NOTATIONS
Let $\mu_t, \mu_v$ be the training distribution and validation distribution, respectively. We denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. We can then denote the joint distributions of random data-label pairs $(x, f_t(x))_{x \sim \mu_t(x)}$ and $(x, f_v(x))_{x \sim \mu_v(x)}$ as $\mu_t^{f_t}$ and $\mu_v^{f_v}$, respectively, which are the same notations as $\mu_t$ and $\mu_v$ but made with explicit dependence on $f_t$ and $f_v$ for clarity. The distributions of $(f_t(x))_{x \sim \mu_t(x)}$ and $(f_v(x))_{x \sim \mu_v(x)}$ are denoted as $\mu_{f_t}$ and $\mu_{f_v}$, respectively. Besides, we define the conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Let $f : \mathcal{X} \to [0,1]^V$ be the model trained on training data and $\mathcal{L} : \{0,1\}^V \times [0,1]^V \to \mathbb{R}_+$ be the loss function. We denote $\pi \in \Pi(\mu_1, \mu_2)$ as a coupling between a pair of distributions $\mu_1, \mu_2$ and $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as a distance metric function. The 1-Wasserstein distance with respect to the distance function $d$ between two distributions $\mu_1, \mu_2$ is defined as $\mathcal{W}_d(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y)\sim\pi}[d(x,y)]$. More generally, the 1-Wasserstein distance with respect to a cost function $C$ is defined as $\mathcal{W}_C(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y)\sim\pi}[C(x,y)]$.
A.2 STATEMENT OF ASSUMPTIONS
To prove Theorem 1, we need the concept of probabilistic cross-Lipschitzness, which assumes that two labeling functions should produce consistent labels with high probability on two close instances. Definition 3 (Probabilistic Cross-Lipschitzness). Two labeling functions ft : X → {0, 1}V and fv : X → {0, 1}V are (ϵ, δ)-probabilistic cross-Lipschitz w.r.t. a joint distribution π over X × X if for all ϵ > 0:
P(x1,x2)∼π[∥ft(x1)− fv(x2)∥ > ϵd(x1, x2)] ≤ δ. (4)
Intuitively, given labeling functions ft, fv and a coupling π, we can bound the probability of finding pairs of training and validation instances labelled differently in a (1/ϵ)-ball with respect to π.
Our Assumptions. Assuming that $f$ is an $\epsilon$-Lipschitz function. Given a metric function $d(\cdot, \cdot)$, we define a cost function $C$ between $(x_t, y_t)$ and $(x_v, y_v)$ as
$$C((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \quad (5)$$
where $c$ is a constant. Let $\pi^*_{x,y}$ be the coupling between $\mu_t^{f_t}, \mu_v^{f_v}$ such that
$$\pi^*_{x,y} := \arg\inf_{\pi \in \Pi(\mu_t^{f_t}, \mu_v^{f_v})} \mathbb{E}_{((x_t,y_t),(x_v,y_v)) \sim \pi}[C((x_t, y_t), (x_v, y_v))]. \quad (6)$$
We define two couplings $\pi^*$ and $\tilde{\pi}^*$ between $\mu_t(x), \mu_v(x)$ as follows:
$$\pi^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \, dy_t \, dy_v. \quad (7)$$
For $\tilde{\pi}^*$, we first need to define a coupling between $\mu_{f_t}, \mu_{f_v}$:
$$\pi^*_y(y_t, y_v) := \int_{\mathcal{X}} \int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \, dx_t \, dx_v \quad (8)$$
and another coupling between $\mu_t^{f_t}, \mu_v^{f_v}$:
$$\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\, \mu_t(x_t|y_t)\, \mu_v(x_v|y_v). \quad (9)$$
Finally, $\tilde{\pi}^*$ is constructed as follows:
$$\tilde{\pi}^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_y(y_t, y_v)\, \mu_t(x_t|y_t)\, \mu_v(x_v|y_v) \, dy_t \, dy_v. \quad (10)$$
It is easy to see that all joint distributions defined above are couplings between the corresponding distribution pairs.
We assume that $f_t, f_v$ are $(\epsilon_{tv}, \delta_{tv})$-probabilistic cross-Lipschitz with respect to $\tilde{\pi}^*$ in metric $d$. Additionally, we assume that $\epsilon_{tv}/\epsilon \leq c$ and that the loss function $\mathcal{L}$ is $k$-Lipschitz in both inputs. Besides, from their definitions above, we have that $\|f(x)\|, \|f_t(x)\|, \|f_v(x)\| \leq V$. The assumption of probabilistic cross-Lipschitzness would be violated only when the underlying coupling assigns large probability to pairs of training-validation features that are close enough (within a $1/\epsilon_{tv}$-ball) but labeled differently. However, $\tilde{\pi}^*$ is generally not such a coupling. Note that $\pi^*_{x,y}$ is the optimal coupling between training and validation distributions that minimizes a cost function $C$ pertaining to both feature and label space. Hence, $\pi^*_y(y_t, y_v)$, the marginal distribution of $\pi^*_{x,y}$ over the training and validation label space, tends to assign high probability to those label pairs that agree. On the other hand, $\tilde{\pi}^*_{x,y}$ can be thought of as a coupling that first generates training-validation labels from $\pi^*_y$ and then generates the features in each dataset conditioned on the corresponding labels. Hence, the marginal distribution $\tilde{\pi}^*$ of training-validation feature pairs generated by $\tilde{\pi}^*_{x,y}$ would assign high likelihood to those features with the same labels. So, conceptually, the probabilistic cross-Lipschitzness assumption should be easily satisfied by $\tilde{\pi}^*$.
A.3 DETAILED PROOF
Theorem 1 (restated). Given the above assumptions, we have
$$\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \leq \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\,\mathcal{W}_C(\mu_t^{f_t}, \mu_v^{f_v}) + 2kV\delta_{tv}. \quad (11)$$
Proof.
$$\mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \quad (12)$$
$$= \mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] \quad (13)$$
$$\leq \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + \left| \mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] \right|. \quad (14)$$
We bound $\left| \mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] \right|$ as follows:
$$\left| \mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] \right| \quad (15)$$
$$= \left| \int_{\mathcal{X}^2} [\mathcal{L}(f_v(x_v), f(x_v)) - \mathcal{L}(f_t(x_t), f(x_t))] \, d\pi^*(x_t, x_v) \right| \quad (16)$$
$$= \left| \int_{\mathcal{X}^2} [\mathcal{L}(f_v(x_v), f(x_v)) - \mathcal{L}(f_v(x_v), f(x_t)) + \mathcal{L}(f_v(x_v), f(x_t)) - \mathcal{L}(f_t(x_t), f(x_t))] \, d\pi^*(x_t, x_v) \right| \quad (17)$$
$$\leq \underbrace{\int_{\mathcal{X}^2} |\mathcal{L}(f_v(x_v), f(x_v)) - \mathcal{L}(f_v(x_v), f(x_t))| \, d\pi^*(x_t, x_v)}_{U_1} \quad (18)$$
$$+ \underbrace{\int_{\mathcal{X}^2} |\mathcal{L}(f_v(x_v), f(x_t)) - \mathcal{L}(f_t(x_t), f(x_t))| \, d\pi^*(x_t, x_v)}_{U_2}, \quad (19)$$
where the last inequality is due to the triangle inequality.
Now, we bound $U_1$ and $U_2$ separately. For $U_1$, we have
$$U_1 \leq k \int_{\mathcal{X}^2} \|f(x_v) - f(x_t)\| \, d\pi^*(x_t, x_v) \quad (20)$$
$$\leq k\epsilon \int_{\mathcal{X}^2} d(x_t, x_v) \, d\pi^*(x_t, x_v), \quad (21)$$
where both inequalities are due to the Lipschitzness of $\mathcal{L}$ and $f$. In order to bound $U_2$, we first recall that $\pi^*_y(y_t, y_v) = \int_{\mathcal{X}} \int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \, dx_t \, dx_v$ and $\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\, \mu_t(x_t|y_t)\, \mu_v(x_v|y_v)$.
Observe that
$$U_2 = \int_{\mathcal{X}^2} \int_{\mathcal{Y}^2} |\mathcal{L}(f_v(x_v), f(x_t)) - \mathcal{L}(f_t(x_t), f(x_t))| \, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (22)$$
$$= \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2} |\mathcal{L}(y_v, f(x_t)) - \mathcal{L}(y_t, f(x_t))| \, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (23)$$
$$\leq k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2} \|y_v - y_t\| \, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (24)$$
$$= k \int_{\mathcal{Y}^2} \|y_v - y_t\| \, d\pi^*_y(y_t, y_v), \quad (25)$$
where the second equality is due to the fact that if $y_t \neq f_t(x_t)$ or $y_v \neq f_v(x_v)$, then $\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) = 0$.
Now we can bound $U_2$ as follows:
$$U_2 \leq k \int_{\mathcal{Y}^2} \|y_v - y_t\| \, d\pi^*_y(y_t, y_v) \quad (26)$$
$$= k \int_{\mathcal{X}^2} \int_{\mathcal{Y}^2} \|y_v - y_t\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (27)$$
$$= k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)), \quad (28)$$
where the last step holds since if $y_t \neq f_t(x_t)$ or $y_v \neq f_v(x_v)$ then $\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) = 0$.
Define the region $A = \{(x_t, x_v) : \|f_v(x_v) - f_t(x_t)\| < \epsilon_{tv}\, d(x_t, x_v)\}$, then
$$k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (29)$$
$$= k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2 \setminus A} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (30)$$
$$+ k \int_{\mathcal{Y}^2} \int_{A} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (31)$$
$$\leq k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2 \setminus A} 2V \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (32)$$
$$+ k \int_{\mathcal{Y}^2} \int_{A} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)). \quad (33)$$
Let us define $\tilde{f}_t(x_t) = f_t(x_t)$ and $\tilde{f}_v(x_v) = f_v(x_v)$ if $(x_t, x_v) \in A$, and $\tilde{f}_t(x_t) = \tilde{f}_v(x_v) = 0$ otherwise (note that $\|\tilde{f}_v(x_v) - \tilde{f}_t(x_t)\| \leq \epsilon_{tv}\, d(x_t, x_v)$ for all $(x_t, x_v) \in \mathcal{X}^2$); then we can bound the second term as follows:
$$k \int_{\mathcal{Y}^2} \int_{A} \|f_v(x_v) - f_t(x_t)\| \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (34)$$
$$\leq k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \int_{A} \|f_v(x_v) - f_t(x_t)\| \, d\mu_t(x_t|y_t)\, d\mu_v(x_v|y_v) \quad (35)$$
$$= k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \int_{\mathcal{X}^2} \left\|\tilde{f}_v(x_v) - \tilde{f}_t(x_t)\right\| \, d\mu_t(x_t|y_t)\, d\mu_v(x_v|y_v) \quad (36)$$
$$= k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \left\| \mathbb{E}_{x_v \sim \mu_v(\cdot|y_v)}[\tilde{f}_v(x_v)] - \mathbb{E}_{x_t \sim \mu_t(\cdot|y_t)}[\tilde{f}_t(x_t)] \right\| \quad (37)$$
$$\leq k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)). \quad (38)$$
Inequality (38) is a consequence of the duality form of the Kantorovich-Rubinstein theorem (Villani (2021), Chapter 1).
Combining the two parts, we have
$$U_2 \leq k \int_{\mathcal{Y}^2} \int_{\mathcal{X}^2 \setminus A} 2V \, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \quad (39)$$
$$+ k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)) \quad (40)$$
$$\leq 2kV\delta_{tv} + k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \quad (41)$$
where the last step is due to the probabilistic cross-Lipschitzness of $f_t, f_v$ with respect to $\tilde{\pi}^*_{x,y}$.
Now, combining the bounds for $U_1$ and $U_2$, we have
$$\mathbb{E}_{x\sim\mu_v(x)}[\mathcal{L}(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[\mathcal{L}(f_t(x), f(x))] \quad (42)$$
$$\leq k\epsilon \int_{\mathcal{X}^2} d(x_t, x_v)\, d\pi^*(x_t, x_v) + 2kV\delta_{tv} + k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)) \quad (43)$$
$$= k \int_{(\mathcal{X}\times\mathcal{Y})^2} [\epsilon\, d(x_t, x_v) + \epsilon_{tv}\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))] \, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) + 2kV\delta_{tv} \quad (44)$$
$$\leq k \int_{(\mathcal{X}\times\mathcal{Y})^2} [\epsilon\, d(x_t, x_v) + c\epsilon\, \mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))] \, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) + 2kV\delta_{tv} \quad (45)$$
$$= k\epsilon\, \mathbb{E}_{\pi^*_{x,y}}[C((x_t, y_t), (x_v, y_v))] + 2kV\delta_{tv} \quad (46)$$
$$= k\epsilon\, \mathcal{W}_C(\mu_t^{f_t}, \mu_v^{f_v}) + 2kV\delta_{tv}, \quad (47)$$
where the last step is due to the definition of $\pi^*_{x,y}$. This leads to the final conclusion.
Theorem 5 (restated). Let $\mathrm{OT}(\mu_t, \mu_v)$ and $\mathrm{OT}_\varepsilon(\mu_t, \mu_v)$ be the original formulation and the entropy penalized formulation (as defined in Subsection 2.1) for the OT problem between the empirical measures $\mu_t$ and $\mu_v$ associated with the two datasets $D_t$ and $D_v$, respectively. Then, for any $i \neq j \neq k \in \{1, 2, \ldots, N\}$ and $o \neq p \neq q \in \{1, 2, \ldots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $D_t$ and the difference for $z'_p$ and $z'_q$ in $D_v$ can be calculated as
$$\frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{kj}} - \frac{1}{(\pi^*_\varepsilon)_{ij}} \right),$$
$$\frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{oq}} - \frac{1}{(\pi^*_\varepsilon)_{op}} \right),$$
where $\pi^*_\varepsilon$ is the optimal primal solution to the entropy penalized OT problem, $z_j$ is any datapoint in $D_t$ other than $z_i$ or $z_k$, $z'_o$ is any datapoint in $D_v$ other than $z'_p$ or $z'_q$, $|D_t| = N$, and $|D_v| = M$.
Proof. Let $L(\pi, f, g)$ and $L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon)$ be the Lagrangian functions for the original formulation and the entropy penalized formulation between the datasets $D_t$ and $D_v$, respectively, which can be written as
$$L(\pi, f, g) = \langle \pi, c \rangle + \sum_{i=1}^{N} f_i \cdot (\pi'_i \cdot I_M - \mu_t(z_i)) + \sum_{j=1}^{M} g_j \cdot (I'_N \cdot \pi_j - \mu_v(z_j)),$$
$$L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon) = \langle \pi_\varepsilon, c \rangle + \varepsilon \cdot \sum_{i=1}^{N} \sum_{j=1}^{M} \log \frac{(\pi_\varepsilon)_{ij}}{\mu_t(z_i) \cdot \mu_v(z_j)} + \sum_{i=1}^{N} (f_\varepsilon)_i \cdot [(\pi_\varepsilon)'_i \cdot I_M - \mu_t(z_i)] + \sum_{j=1}^{M} (g_\varepsilon)_j \cdot [I'_N \cdot (\pi_\varepsilon)_j - \mu_v(z_j)],$$
where $c \in \mathbb{R}^{N \times M}$ is the cost matrix consisting of distances between the $N$ datapoints in $D_t$ and the $M$ datapoints in $D_v$, $I_M = (1, 1, \ldots, 1)^T \in \mathbb{R}^{M \times 1}$ and $I'_N = (1, 1, \ldots, 1) \in \mathbb{R}^{1 \times N}$, $\pi$ and $(f, g)$ denote the primal and dual variables, and $\pi'_i$ and $\pi_j$ denote the $i$-th row and the $j$-th column of the matrix $\pi$, respectively.
The first-order necessary condition for optima in the Lagrangian Multiplier Theorem gives that $\nabla_\pi L(\pi^*, f^*, g^*) = 0$ and $\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon) = 0$, where $\pi^*$ and $(f^*, g^*)$ denote the optimal solutions to the primal and dual problems, respectively. Thus, for any $i \in \{1, 2, \ldots, N\}$ and $j \in \{1, 2, \ldots, M\}$, we have
$$\nabla_\pi L(\pi^*, f^*, g^*)_{ij} = c_{ij} + f^*_i + g^*_j = 0,$$
$$\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon)_{ij} = c_{ij} + \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} + (f_\varepsilon)^*_i + (g_\varepsilon)^*_j = 0.$$
Subtracting, we have
$$[f^*_i - (f_\varepsilon)^*_i] + \left[g^*_j - (g_\varepsilon)^*_j\right] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} = 0.$$
Then, for any $k \neq i \in \{1, 2, \ldots, N\}$, we have
$$[f^*_k - (f_\varepsilon)^*_k] + \left[g^*_j - (g_\varepsilon)^*_j\right] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{kj}} = 0.$$
Subtracting and reorganizing, we get
$$[(f_\varepsilon)^*_i - (f_\varepsilon)^*_k] = (f^*_i - f^*_k) - \varepsilon \cdot \left[ \frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}} \right].$$
From the definition of the calibrated gradients in Eq. 1, we have
$$\frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,(f^*_i - f^*_k),$$
$$\frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,[(f_\varepsilon)^*_i - (f_\varepsilon)^*_k].$$
Finally, subtracting and reorganizing, we have
$$\frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left[ \frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}} \right].$$
The proof for the second part of the Theorem is similar:
$$\frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial \mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left[ \frac{1}{(\pi^*_\varepsilon)_{op}} - \frac{1}{(\pi^*_\varepsilon)_{oq}} \right].$$
Then the proof is complete.
APPENDIX B ADDITIONAL EXPERIMENTAL RESULTS
B.1 EVALUATING DATA VALUATION USE CASES ON DIVERSE DATASETS
In the main text, we have focused our evaluation on CIFAR-10. Here, we provide experiments to show effectiveness of LAVA on diverse datasets for detecting bad data.
Backdoor Attack Detection. We evaluate another type of backdoor attack (Section 4), which is the Hello Kitty blending attack (Blend) (Chen et al., 2017) that mixes the target class sample with the Hello Kitty image, as illustrated in Figure 8 (B). We attack the German Traffic Sign dataset (GTSRB) on the target class 6 by poisoning 1764 (5%) samples of the whole dataset. Our method achieves the highest detection rate, as shown in Figure 6(a). In particular, the 5000 points with lowest data values contain all poisoned data based on the LAVA data values, while the second best method on this task, KNN-SV, can cover all poisoned examples with around 11,000 samples. Our algorithm performs especially well for this attack, since the label of poisoned data is changed to the target class and the patching trigger is large. Both the label and feature changes contribute to the increase of the OT distance and thus ease the detection.
Noisy Feature Detection. Here, we show the usage of LAVA on the MNIST dataset, where 25% of the whole dataset is contaminated by feature noise. As shown in Figure 6(b), our method still outperforms all the baselines by detecting all noisy data within the first 14,000 samples, which is 5,000 fewer than the best baseline requires.
Figure 7: Visualization of irrelevant data detection within the CIFAR100 dataset. The left column is one example of the target class and the images on the right columns are selected irrelevant data in the corresponding classes detected by LAVA.
Irrelevant Data. We perform another irrelevant data detection experiment and focus on the CIFAR100 dataset. In Figure 7, we illustrate some of the irrelevant samples detected by LAVA. Intuitively, irrelevant data in the class should be easily detected by LAVA, since the images are far from the representative of the class and increasing the probability mass associated with these images leads to larger distributional distance to the clean validation data.
B.2 ABLATION STUDY
We perform an ablation study on validation size and on the hyperparameters in our method, where we provide insights on the impact of setting changes. We use the mislabeled detection use case and the CIFAR-10 dataset as an example setting for the ablation study.
B.2.1 VALIDATION SIZE
Figure 8: Visualization of each backdoor attack: A) Trojan-SQ attack. B) Blend attack. C) Trojan-WM attack.
For all the experiments in the main text, we use a validation set of size 10,000. Naturally, we want to examine the effect of the size of the validation set on the detection rate of mislabeled data. In Figure 9 (c), we illustrate the detection-rate performance with smaller validation data sizes: 200, 500, 2,000, and 5,000. We observe that even reducing the validation set by half, to 5,000, largely maintains the detection rate performance. Small validation sets (200, 500, 2,000) degrade the detection rate by more than 50%. Despite the performance degradation, our detection performance with these small validation sizes is in fact comparable with the baselines in Figure 3 IV.(a) that leverage the full validation size of 10,000. Additionally, when restricting LAVA and the other baselines to a validation set of 500 samples, our method is better than the best baseline at detecting mislabeled data in the 50k CIFAR-10 samples with 25% being mislabeled, as shown in Figure 11.
B.2.2 FEATURE WEIGHT
Recall the class-wise Wasserstein distance is defined with respect to the following distance metric: C((xt, yt), (xv, yv)) = d(xt, xv)+ cWd(µt(·|yt), µv(·|yv)). Actually, one can change the relative weight between feature distance d(xt, xv) and the label distance Wd(µt(·|yt), µv(·|yv)). Here, we show the effect of upweighting the feature distance, while keeping the label weight at 1 and the results are illustrated in Figure 9 (a). As we are moving away from uniform weight, the performance on detection rate is decreasing with larger feature weights. With feature weight of 100, our method performs similarly as the random detector. Indeed, as we increase weight on the features, the weight on the label distance is decreased. As the weight reaches 100, our method performs similarly as the feature embedder without knowing label information and hence, the mislabeled detection performance is comparable to the random baseline.
B.2.3 LABEL WEIGHT
Next, we shift focus to label weight. We examine the effect of upweighting the label distance, while keeping the feature weight at 1. In Figure 9 (b), as the label weight increases, the detection rate performance deteriorates. When we increase the label distance, the feature information becomes neglected, which is not as effective as the balanced weights between feature and label distances.
B.2.4 FEATURE EMBEDDER
We use a feature embedder to extract features for the feature-distance part of our method. We train the feature embedder on the accessible validation set until convergence of the training accuracy. Different embedder architectures might be sensitive to different aspects of the input and thus produce different feature outputs. Nevertheless, as we observe in Figure 10, the detection performance associated with different model architectures of the feature embedder is similar. Hence, in practice, one can flexibly choose the feature embedder to be used in tandem with our method as long as it has large enough capacity. Furthermore, we note that these feature embedders have not learned the clean distribution from the validation data; e.g., in CIFAR-10, the model trained on 10K validation data achieves only around 65% accuracy on the 50K clean datapoints, and the model trained on 500 validation data achieves around 25% accuracy. We additionally show in Figures 14 and 15 that our method significantly outperforms the PreActResNet18 model trained directly on validation data of size 500 and 10K in detecting bad data, which clearly distinguishes LAVA from simple feature embedders.
B.3 BALANCING UNBALANCED DATASET
Although machine learning practitioners might be using clean data for training a model, the dataset is often unbalanced, which can lead to degraded model performance (Thai-Nghe et al., 2009). To recover higher model accuracy, we can rebalance unbalanced datasets by removing the points that cause the disproportion. We showcase how LAVA effectively rebalances the dataset by removing points with poor values and keeping points with the best values. We consider a CIFAR-10 dataset in which the class Frog is unbalanced and contains 5,000 samples while the other classes have only half as many (i.e., 2,500 samples). In Figure 12, we demonstrate the effectiveness of LAVA's valuation, which not only shrinks the dataset by removing poor-value points but also improves the model accuracy. At the same time, the other valuation methods were not able to steadily increase the model accuracy and quickly degraded the model performance, which in turn underscores the effectiveness of our method.
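A minimal sketch of the rebalancing procedure described above is given below: given per-point values and labels, keep the highest-valued points in every class and discard the rest. The function and parameter names are illustrative choices, not part of the released implementation.

```python
import numpy as np

def rebalance_by_value(values, labels, per_class_count):
    """Keep the `per_class_count` highest-valued points in every class,
    discarding the lowest-valued ones from over-represented classes."""
    keep = []
    for c in np.unique(labels):
        cls_idx = np.where(labels == c)[0]
        # sort the class members by value, descending, and keep the best ones
        best = cls_idx[np.argsort(-values[cls_idx])][:per_class_count]
        keep.extend(best.tolist())
    return np.sort(np.array(keep))
```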
B.4 REDUCING TRAINING SET SIZE
With the growing size of training datasets, the computation cost and memory overhead naturally increase, which may make it impossible for practitioners with limited resources to train a model. Therefore, the ability to reduce the training dataset size (Sener & Savarese, 2018) frees up some of the computational burden and thus allows those with limited resources to fully appreciate the model training process. Motivated by this challenge, we want to leverage our data valuation method to significantly decrease the training dataset size while maintaining the model performance. As in the previous section, the idea is to keep a subset of datapoints with the best values and remove the poorly valued ones. To demonstrate the effectiveness of LAVA's valuation, we perform this task on a clean CIFAR-10 dataset with 2,500 samples from each class and compare with other data valuation methods. As presented in Figure 13, the performance is well maintained even with smaller subsets of the original dataset. Remarkably, even when reducing a clean training set (25,000 samples) by 15% based on our method's valuation, the performance stays relatively high while outperforming the other valuation baselines.
B.5 DATA SUMMARIZATION
As dataset sizes grow, so does the space needed to store the data. Thus, a buyer often would like to shrink the dataset to minimize resources while retaining performance. Unlike reducing the training set size as described in Section B.4, in this experiment we select a smaller, representative subset of the whole dataset that can maintain good performance. To measure the performance of each subset, we measure the validation performance of the model trained on that subset minus the validation performance of the model trained on a random subset of the same size, an experiment also performed in Kwon & Zou (2021). In Figure 16, we can observe that our method selects small subsets that perform better than the subsets chosen by the baseline methods most of the time.
B.6 SCALABILITY EXPERIMENT
In the main paper, we have demonstrated time complexity comparison between LAVA and other valuation methods. We have reported runtime comparisons only for 2,000 test samples as this is the scale existing methods can solve in a not excessively long time (within a day). It showcases the advantageous computing efficiency that the proposed approach enjoys over other methods. We further want to emphasize the computational efficiency of LAVA and demonstrate computation efficiency on a larger scale dataset (100,000 samples) with higher dimensions, ImageNet-100. Additionally, we evaluate
other baselines which are able to finish within a day of computation to highlight the advantage of our method as presented in Table 1. Moreover, we highlight the near-linear time complexity of LAVA on CIFAR-10, which shows practical computation efficiency of our method as shown in Figure 17.
B.7 GENERALIZATION TO OTHER TYPES OF BACKDOOR ATTACKS
As we have provided the results of the Trojan square attack (TrojanSQ) (Liu et al., 2017) in Section 4, we now apply LAVA to other backdoor attacks, which are Hello Kitty blending attack (Blend) (Chen et al., 2017) and Trojan watermark attack (Trojan-WM) (Liu et al., 2017), and evaluate the efficacy of our method in detecting different types of backdoor attacks. We simulate these attacks by selecting the target class Airplane and poisoning 2, 500 (5%) samples of the CIFAR-10 dataset of size 50, 000. The backdoor trigger adopted in each attack is portrayed in Figure 8. In Figure 18, we observe that our method can achieve superior detection performance on all the attacks considered. The reason is that despite the difference in trigger pattern, all of these attacks modify both the label and the feature of a poisoned image and thus result in the deviation of our distributional distance that is defined over the product space of feature and label.
B.8 IMPLICATIONS OF THE PROPOSED DATA VALUATION METHOD TO REAL-WORLD DATA MARKETPLACES
One concern in a real-world data marketplace is that data is freely replicable. However, replicates of data introduce no new information, and therefore prior work has argued that a data utility function should be robust to direct data copying (Xu et al., 2021). One advantage of using the class-wise Wasserstein distance to measure data utility is that it is robust to duplication. Our method, by its distributional formulation, naturally ignores duplicated sets. As shown in Table 3, even when we repeat the set five times beyond the original source set, the distance remains the same. Additionally, with small noise
changes in the features, the distance metric is barely affected. Another concern in the real-world marketplace is that one might find a single data that has highest contribution and duplicate it to maximize the profit. However, again due to the nature of our distributional formulation, duplicating a single point multiple times would increase the distance between the training and the validation set due to the imbalance in training distribution caused by copying that point.
B.9 DETAILED EXPERIMENTAL SETTINGS
Datasets and Models. Table 2 summarizes the details of the dataset, the models, as well as their licenses adopted in our experiments.
Hardware. A server with an NVIDIA Tesla P100-PCIE-16GB graphic card is used as the hardware platform in this work.
Software.
For our implementation, we use PyTorch for the main framework (Paszke et al., 2019), assisted by three main libraries, which are otdd (optimal transport calculation setup with datasets) (Alvarez-Melis & Fusi, 2020), geomloss (actual optimal transport calculation) (Feydy et al., 2019), and numpy (tool for array routines) (Harris et al., 2020). | 1. What is the focus of the paper regarding data valuation?
2. What are the strengths of the proposed method, particularly in its theoretical foundation?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions about the method's effectiveness or computational efficiency? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a data valuation method that is oblivious to the downstream learning algorithm. The main idea is to evaluate the training data by a class-wise Wasserstein distance between the training and the validation set. They prove that the class-wise Wasserstein distance approximates the performance for any given model under certain Lipschitz conditions. They also propose a method to evaluate individual data by the sensitivity analysis of this class-wise Wasserstein distance. Finally, the paper empirically evaluates the performance and efficiency of their methods. They show that their method improves the state-of-the-art performance while being orders of magnitude faster.
Strengths And Weaknesses
The paper studies an interesting problem and proposes a novel data valuation method using a natural measure based on the class-wise Wasserstein distance. They provide solid theoretical justification for their approach by proving that their measure characterizes the upper bound of the validation performance of any given models. They also propose an interpretable and easily-computable measure for individual data valuation. Finally, they conducted extensive experiments to test the efficacy and efficiency of their approaches in practical use cases.
Clarity, Quality, Novelty And Reproducibility
The paper is very organized and well-written. The contribution is novel as far as I know. |
ICLR | Title
LAVA: Data Valuation without Pre-Specified Learning Algorithms
Abstract
Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis and the choice of the learning algorithm is still undetermined then. Another side-effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computation burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. (1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. (2) We develop a novel method to value individual data based on the sensitivity analysis of the class-wise Wasserstein distance. Importantly, these values can be directly obtained for free from the output of off-the-shelf optimization solvers when computing the distance. (3) We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a significant improvement over the state-of-the-art performance while being orders of magnitude faster.
1 INTRODUCTION
Advances in machine learning (ML) crucially rely on the availability of large, relevant, and highquality datasets. However, real-world data sources often come in different sizes, relevance levels, and qualities, differing in their value for an ML task. Hence, a fundamental question is how to quantify the value of individual data sources. Data valuation has a wide range of use cases both within the domain of ML and beyond. It can help practitioners enhance the model performance through prioritizing high-value data sources (Ghorbani & Zou, 2019), and it allows one to make strategic and economic decisions in data exchange (Scelta et al., 2019).
In the past literature (Ghorbani & Zou, 2019; Jia et al., 2019b; Kwon & Zou, 2021), data valuation is posed as a problem of equitably splitting the validation performance of a given learning algorithm among the training data. Formally, given a training dataset Dt = {zi}Ni=1, a validation dataset Dv , a learning algorithm A, and a model performance metric PERF (e.g., classification accuracy), a utility function is first defined over all subsets S ⊆ Dt of the training data: U(S) := PERF(A(S)). Then, the objective of data valuation is to find a score vector s ∈ RN that represents the allocation to each datapoint. For instance, one simple way to value a point zi is through leave-one-out (LOO) error U(Dt)− U(Dt \ {zi}), i.e., the change of model performance when the point is excluded from training. Most of the recent works have leveraged concepts originating from cooperative game theory (CGT), such as the Shapley value (Ghorbani & Zou, 2019; Jia et al., 2019b), Banzhaf value (Wang
∗Equal contribution. Repository publicly available on Github: https://github.com/ruoxi-jia-group/LAVA.
& Jia, 2022), general semivalues (Kwon & Zou, 2021), and Least cores (Yan & Procaccia, 2021) to value data. Like the LOO, all of these concepts are defined based on the utility function.
Since the utility function is defined w.r.t. a specific learning algorithm, the data values calculated from the utility function also depend on the learning algorithm. In practice, there are many choice points pertaining to a learning algorithm, such as the model to be trained, the type of learning algorithm, as well as the hyperparameters. The detailed settings of the learning algorithms are often derived from data analysis. However, in many critical applications of data valuation such as informing data acquisition priorities and designing data pricing mechanism, data needs to be valued before the actual analysis and the choice points of the learning algorithm are still undetermined at that time. This gap presents a main hurdle for deploying existing data valuation schemes in the real world.
The reliance on learning algorithms also makes existing data valuation schemes difficult to scale to large datasets. The exact evaluation of LOO error and CGT-based data value notions require evaluating utility functions over different subsets and each evaluation entails retraining the model on that subset: the number of retraining times is linear in the number of data points for the former, and exponential for the latter. While existing works have proposed a variety of approximation algorithms, scaling up the calculation of these notions to large datasets remains expensive. Further, learning-algorithm-dependent approaches rely on the performance scores associated with models trained on different subsets to determine the value of data; thus, they are susceptible to noise due to training stochasticity when the learning algorithm is randomized (e.g., SGD) (Wang & Jia, 2022).
This work addresses these limitations by introducing a learning-agnostic data valuation (LAVA) framework. LAVA is able to produce efficient and useful estimates of data value in a way that is oblivious to downstream learning algorithms. Our technical contributions are listed as follows.
Proxy for validation performance. We propose a proxy for the validation performance associated with a training set based on the non-conventional class-wise Wasserstein distance (Alvarez-Melis & Fusi, 2020) between the training and the validation set. The hierarchically-defined Wasserstein distance utilizes a hybrid Euclidean-Wasserstein cost function to compare the feature-label pairs across datasets. We show that this distance characterizes the upper bound of the validation performance of any given models under certain Lipschitz conditions.
Sensitivity-analysis-based data valuation. We develop a method to assess the value of an individual training point by analyzing the sensitivity of the particular Wasserstein distance to the perturbations on the corresponding probability mass. The values can be directly obtained for free from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. As the Wasserstein distance can be solved much more efficiently with entropy regularization (Cuturi, 2013), in our experiments, we utilize the duals of the entropy-regularized program to approximate the sensitivity. Remarkably, we show that the gap between two data values under the original non-regularized Wasserstein distance can be recovered exactly from the solutions to the regularized program.
State-of-the-art performance for differentiating data quality. We evaluate LAVA over a wide range of use cases, including detecting mislabeled data, backdoor attacks, poisoning attacks, noisy features, and task-irrelevant data, in which some of these are first conducted in the data valuation setting. Our results show that, surprisingly, the learning-agnostic feature of our framework enables a significant performance improvement over existing methods, while being orders of magnitude faster.
2 MEASURING DATASET UTILITY VIA OPTIMAL TRANSPORT
In this section, we consider the problem of quantifying training data utility U(Dt) without the knowledge of learning algorithms. Similar to most of the existing data valuation frameworks, we assume access to a set of validation points Dv. Our idea is inspired by recent work on using the hierarchically-defined Wasserstein distance to characterize the relatedness of two datasets (AlvarezMelis & Fusi, 2020). Our contribution here is to apply that particular Wasserstein distance to the data valuation problem and provide a theoretical result that connects the distance to validation performance of a model, which might be of independent interest.
2.1 OPTIMAL TRANSPORT-BASED DATASET DISTANCE
Background on Optimal Transport (OT). OT is a celebrated choice for measuring the discrepancy between probability distributions (Villani, 2009). Compared to other notable dissimilarity measures such as the Kullback-Leibler Divergence (Kullback & Leibler, 1951) or Maximum Mean Discrepancies (MMD) (Szekely et al., 2005), the mathematically well-defined OT distance has advantageous analytical properties. For instance, OT is a distance metric, being computationally tractable and computable from finite samples (Genevay et al., 2018; Feydy et al., 2019).
The Kantorovich formulation (Kantorovich, 1942) defines the OT problem as a Linear Program (LP). Given probability measures $\mu_t, \mu_v$ over the space $\mathcal{Z}$, the OT problem is defined as $\mathrm{OT}(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} C(z, z') \, d\pi(z, z')$, where $\Pi(\mu_t, \mu_v) := \{ \pi \in \mathcal{P}(\mathcal{Z} \times \mathcal{Z}) \mid \int_{\mathcal{Z}} \pi(z, z') \, dz' = \mu_t, \int_{\mathcal{Z}} \pi(z, z') \, dz = \mu_v \}$ denotes the collection of couplings between the two distributions $\mu_t$ and $\mu_v$, and $C : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}_+$ is some symmetric positive cost function (with $C(z, z) = 0$). If $C(z, z')$ is the Euclidean distance between $z$ and $z'$ according to the distance metric $d$, then $\mathrm{OT}(\mu_t, \mu_v)$ is the 1-Wasserstein distance, which we denote as $\mathcal{W}_C(\mu_t, \mu_v) = \mathcal{W}_d(\mu_t, \mu_v) := \mathrm{OT}(\mu_t, \mu_v)$. In this work, the notations OT and $\mathcal{W}$ are used interchangeably, with the slight difference that we use OT to emphasize its various formulations while $\mathcal{W}$ specifies the distance metric on which it is computed.
Measuring Dataset Distance. We consider a multi-label setting where we denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, respectively, where $V$ is the number of different labels. Given the training set $D_t = \{(x_i, f_t(x_i))\}_{i=1}^{N}$ of size $N$ and the validation set $D_v = \{(x'_i, f_v(x'_i))\}_{i=1}^{M}$ of size $M$, one can construct the discrete measures $\mu_t(x, y) := \frac{1}{N} \sum_{i=1}^{N} \delta_{(x_i, y_i)}$ and $\mu_v(x, y) := \frac{1}{M} \sum_{i=1}^{M} \delta_{(x'_i, y'_i)}$, where $\delta$ is the Dirac function. Consider that each datapoint consists of a feature-label pair $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}$. While the Euclidean distance naturally provides the metric to measure distance between features, the distance between labels generally lacks a definition. Consequently, we define the conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Inspired by Alvarez-Melis & Fusi (2020), we measure the distance between two labels in terms of the OT distance between the conditional distributions of the features given each label. Formally, we adopt the following cost function between feature-label pairs: $C((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c \geq 0$ is a weight coefficient. We note that $C$ is a distance metric since $\mathcal{W}_d$ is a valid distance metric. With the definition of $C$, we propose to measure the distance between the training and validation sets using the non-conventional, hierarchically-defined Wasserstein distance between the corresponding discrete measures: $\mathcal{W}_C(\mu_t, \mu_v) = \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} C(z, z') \, d\pi(z, z')$.
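To make the hierarchical construction concrete, the following is a brute-force NumPy sketch that assembles the cost matrix on two small labeled datasets: the label-to-label term is itself an OT distance between per-class feature distributions, and the feature term is a Euclidean distance. It assumes features are already embedded as vectors and labels are integer class ids; the helper names and the small dense Sinkhorn loop are our illustrative choices (the released implementation instead relies on the otdd and geomloss libraries and on log-domain solvers for numerical stability).

```python
import numpy as np

def sinkhorn_cost(a, b, M, reg=0.1, n_iter=200):
    """Entropy-regularized OT cost between histograms a (n,) and b (m,)
    with ground-cost matrix M (n, m), via plain Sinkhorn iterations."""
    K = np.exp(-M / reg)              # may underflow for large M/reg; fine for a sketch
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]
    return float((P * M).sum())

def classwise_cost_matrix(Xt, yt, Xv, yv, c=1.0, reg=0.1):
    """Hierarchical cost C((x_t, y_t), (x_v, y_v)) = ||x_t - x_v|| + c * W_d(mu_t(.|y_t), mu_v(.|y_v))."""
    feat_cost = np.linalg.norm(Xt[:, None, :] - Xv[None, :, :], axis=-1)   # (N, M) feature term
    label_w = {}
    for lt in np.unique(yt):
        for lv in np.unique(yv):
            A, B = Xt[yt == lt], Xv[yv == lv]
            D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
            a = np.full(len(A), 1.0 / len(A))
            b = np.full(len(B), 1.0 / len(B))
            label_w[(int(lt), int(lv))] = sinkhorn_cost(a, b, D, reg)      # label-to-label OT term
    label_cost = np.array([[label_w[(int(t), int(v))] for v in yv] for t in yt])
    return feat_cost + c * label_cost
```

The resulting matrix can then be fed to any OT solver over the two empirical measures to obtain the dataset distance.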
Despite its usefulness and potentially broad applications, existing research has not explored the theoretical properties of this notion or established applications upon it. This work aims to fill that gap by extending it in both directions: novel analytical results are presented to provide its theoretical justification, while an original computing framework is proposed that extends its applications to a new scenario of datapoint valuation.
Computational Acceleration via Entropic Regularization. Solving the problem above scales cubically with $MN$, which is prohibitive for large datasets. Entropy-regularized OT (entropy-OT) has become a prevailing choice for approximating OT distances as it allows for the fastest-known algorithms. Using the iterative Sinkhorn algorithm (Cuturi, 2013) with almost linear time complexity and memory overhead, entropy-OT can be implemented on a large scale with parallel computing (Genevay et al., 2018; Feydy et al., 2019). Given a regularization parameter $\varepsilon > 0$, entropy-OT can be formulated as $\mathrm{OT}_\varepsilon(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} C(z, z') \, d\pi(z, z') + \varepsilon H(\pi \mid \mu_t \otimes \mu_v)$, where $H(\pi \mid \mu_t \otimes \mu_v) = \int_{\mathcal{Z}^2} \log\left(\frac{d\pi}{d\mu_t \, d\mu_v}\right) d\pi$. As $\varepsilon \to 0$, the dual solutions to the $\varepsilon$-entropy-OT converge to their OT counterparts as long as the latter are unique (Nutz & Wiesel, 2021).
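Because the dual potentials of the entropic problem are what Section 3 later differentiates, a log-domain Sinkhorn sketch that also returns them may be useful; this is a standard textbook implementation with our own variable names, not the geomloss routine used in our experiments.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(a, b, M, reg=0.1, n_iter=500):
    """Log-domain Sinkhorn for entropy-OT between histograms a (n,) and b (m,)
    with cost matrix M (n, m). Returns the entropic plan P and dual potentials (f, g)."""
    log_a, log_b = np.log(a), np.log(b)
    f, g = np.zeros(len(a)), np.zeros(len(b))
    for _ in range(n_iter):
        # soft c-transform updates; numerically stable even for small reg
        f = -reg * logsumexp((g[None, :] - M) / reg + log_b[None, :], axis=1)
        g = -reg * logsumexp((f[:, None] - M) / reg + log_a[:, None], axis=0)
    P = np.exp((f[:, None] + g[None, :] - M) / reg + log_a[:, None] + log_b[None, :])
    return P, f, g
```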
2.2 LOWER Class-Wise Wasserstein Distance ENTAILS BETTER VALIDATION PERFORMANCE
In this paper, we propose to use WC, a non-conventional, class-wise Wasserstein distance w.r.t. the special distance function C defined in 2.1, as a learning-agnostic surrogate of validation performance to measure the utility of training data. Note that while Wasserstein distances have been frequently used to bound the learning performance change due to distribution drift (Courty et al., 2017; Damodaran et al., 2018; Shen et al., 2018; Ge et al., 2021), this paper is the first to bound the performance change by the hierarchically-defined Wasserstein distance with respect to the hybrid cost C. Figure 1 provides an empirical justification for using this novel distance metric as a proxy, and presents the relation between the class-wise Wasserstein distance and a model's validation performance. Each curve corresponds to a dataset trained with a specific model to obtain its validation performance. Since the datasets differ in size and structure, their distances are on different scales. Therefore, we normalize the distances to the same scale to present the relation between the Wasserstein distance and model performance, which shows that, despite different datasets and models, the validation performance decreases as the distance increases.
The next theorem theoretically justifies using this Wasserstein distance as a proxy for validation performance of a model. With assumptions on Lipschitzness of the downstream model as well as the labeling functions associated with the training and validation sets (as explicated in Appendix A), we show that the discrepancy between the training and validation performance of a model is bounded by the hierarchically-defined Wasserstein distance between the training and the validation datasets.
Theorem 1. We denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. Let $f : \mathcal{X} \to [0,1]^V$ be the model trained on training data. By definitions, we have that $\|f(\cdot)\|, \|f_t(\cdot)\|, \|f_v(\cdot)\| \leq V$. Let $\mu_t, \mu_v$ be the training and validation distributions, respectively, and let $\mu_t(\cdot|y)$ and $\mu_v(\cdot|y)$ be the corresponding conditional distributions given label $y$. Assume that the model $f$ is $\epsilon$-Lipschitz and the loss function $\mathcal{L} : \{0,1\}^V \times [0,1]^V \to \mathbb{R}_+$ is $k$-Lipschitz in both inputs. Define the cost function $C$ between $(x_v, y_v)$ and $(x_t, y_t)$ as $C((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c$ is a constant. Under a certain cross-Lipschitzness assumption for $f_t$ and $f_v$ detailed in Appendix A, we have $\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \leq \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\,\mathcal{W}_C(\mu_t, \mu_v) + O(kV)$.
Proofs are deferred to Appendix A. The bound is interesting to interpret. The first term on the right-hand side corresponds to the training performance. In practice, when a model with large enough capacity is used, this term is small. The second one is the exact expression of the Wasserstein distance that we propose to use as a proxy for validation performance. The last error term is due to possible violation of the cross-Lipschitzness assumption for ft and fv. This term will be small if ft and fv assign the same label to close features with high probability. If the last term is small enough, it is possible to use the proposed Wasserstein distance as proxy for validation loss provided that f , ft and fv verify the cross-Lipschitz assumptions. The bound resonates with the empirical observation in Figure 1 that with lower distance between the training and the validation data, the validation loss of the trained model decreases.
3 EFFICIENT VALUATION OF INDIVIDUAL DATAPOINTS
Note that the class-wise Wasserstein distance defined in the previous section can be used to measure the utility for subsets of Dt. Given this utility function, one can potentially use existing CGT-based notions such as the Shapley value to measure the contribution of individual points. However, even approximating these notions requires evaluating the utility function on a large number of subsets, which incurs large extra computation costs. In this section, we introduce a new approach to valuating individual points. Remarkably, our values can be directly obtained for free from the output of off-the-shelf optimization solvers once the proposed Wasserstein distance between the full training and testing datasets is computed.
3.1 DATAPOINT VALUATION VIA PARAMETER SENSITIVITY
OT distance is known to be insensitive to small differences while being sensitive to large deviations (Villani, 2021). This feature is naturally suitable for detecting abnormal datapoints: it disregards normal variations in distances between clean data while remaining sensitive to the abnormal distances of outlying points. We propose to measure each individual point's contribution based on the gradient of the OT distance to perturbations on the probability mass associated with that point.
Gradients are local information. However, unlike widely used influence functions that only hold for infinitesimal perturbation (Koh & Liang, 2017), gradients for LP hold precisely in a local range and still encode partial information beyond that range, making it capable of reliably predicting the change to the OT distance due to adding or removing datapoints without the need of re-calculation. Also, the gradients are directed information, revealing both positive and negative contributions for each
datapoint and allowing one to perform ranking of datapoints based on the gradient values. Finally, the OT distance always considers the collective effect of all datapoints in the dataset.
Leveraging the duality theorem for LP, we rewrite the original OT problem (introduced in 2.1) in the equivalent form: $\mathrm{OT}(\mu_t, \mu_v) := \max_{(f,g)\in C^0(\mathcal{Z})^2} \langle f, \mu_t\rangle + \langle g, \mu_v\rangle$, where $C^0(\mathcal{Z})$ is the set of all continuous functions and $f$ and $g$ are the dual variables. Let $\pi^*$ and $(f^*, g^*)$ be the corresponding optimal solutions to the primal and dual problems. The Strong Duality Theorem indicates that $\mathrm{OT}(\pi^*(\mu_t, \mu_v)) = \mathrm{OT}(f^*, g^*)$, where the right-hand side is the distance parameterized by $\mu_t$ and $\mu_v$. From the Sensitivity Theorem (Bertsekas, 1997), we have that the gradient of the distance w.r.t. the probability mass of datapoints in the two datasets can be expressed as follows: $\nabla_{\mu_t} \mathrm{OT}(f^*, g^*) = (f^*)^T$, $\nabla_{\mu_v} \mathrm{OT}(f^*, g^*) = (g^*)^T$. Note that the original formulation in 2.1 is always redundant as the constraint $\sum_{i=1}^{N} \mu_t(z_i) = \sum_{i=1}^{M} \mu_v(z'_i) = 1$ is already implied, rendering the dual solution non-unique. To address this issue, we first remove any one of the constraints in $\Pi(\mu_t, \mu_v)$ to make the primal formulation non-degenerate. Then, we assign a value of zero to the dual variable corresponding to the removed primal constraint.
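To make this concrete, the following sketch shows how the dual potentials can be read off from an off-the-shelf exact OT solver on toy data. It assumes the POT package (Python Optimal Transport) is available; the feature arrays and uniform probability masses are illustrative placeholders, not the paper's released implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
x_train = rng.normal(size=(50, 16))   # placeholder training features
x_val = rng.normal(size=(30, 16))     # placeholder validation features

N, M = len(x_train), len(x_val)
mu_t = np.full(N, 1.0 / N)            # probability mass of each training point
mu_v = np.full(M, 1.0 / M)            # probability mass of each validation point

# Pairwise cost between individual points (plain Euclidean distance here).
cost = ot.dist(x_train, x_val, metric="euclidean")

# Exact LP solver; with log=True the returned log holds the dual potentials
# 'u' (one per training point) and 'v' (one per validation point).
plan, log = ot.emd(mu_t, mu_v, cost, log=True)
f_star, g_star = np.asarray(log["u"]), np.asarray(log["v"])
ot_distance = float(np.sum(plan * cost))
```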
When measuring the gradients of the OT distance w.r.t. the probability mass of a given datapoint in each dataset, we calculate the calibrated gradient as
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} = f_i^* - \frac{1}{N-1}\sum_{j\in\{1,\dots,N\}\setminus i} f_j^*, \qquad \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_i)} = g_i^* - \frac{1}{M-1}\sum_{j\in\{1,\dots,M\}\setminus i} g_j^*, \qquad (1)$$
which represents the rate of change in the OT distance w.r.t. the change of the probability mass of a given datapoint along the direction ensuring that the probability mass for all datapoints in the dataset always sums up to one (explicitly enforcing the removed constraint). The value of the calibrated gradients is independent of which constraint is removed.
Datapoint valuation via calibrated gradients. The calibrated gradients predict how the OT distance changes as more probability mass is shifted to a given datapoint. This can be interpreted as a measure of the contribution of the datapoint to the OT distance. The contribution can be positive or negative, suggesting shifting more probability mass to this datapoint would result in an increase or decrease of the dataset distance, respectively. If we want a training set to match the distribution of the validation dataset, then removing datapoints with large positive gradients while increasing datapoints with large negative gradients can be expected to reduce their OT distance. As we will show later, the calibrated gradients can provide a tool to detect abnormal or irrelevant data in various applications.
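A minimal sketch of Eq. 1 and the induced ranking is given below; it reuses the dual potentials `f_star` and `g_star` from the previous snippet, and all variable names are illustrative.

```python
def calibrated_gradients(f_star, g_star):
    """Eq. 1: offset each dual potential by the mean of the remaining ones."""
    N, M = len(f_star), len(g_star)
    grad_train = f_star - (f_star.sum() - f_star) / (N - 1)
    grad_val = g_star - (g_star.sum() - g_star) / (M - 1)
    return grad_train, grad_val

grad_train, grad_val = calibrated_gradients(f_star, g_star)

# A large positive gradient means that shifting mass onto the point would increase
# the distance to the validation set, so we use the negated gradient as the value.
values = -grad_train
suspicious_first = np.argsort(values)   # ascending value: likely-bad points first
```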
Radius for accurate predictions. The Linear Programming theories (Bertsimas & Tsitsiklis, 1997) give that for each non-degenerate optimal solution, we are always able to perturb parameters on the right-hand side of primal constraints (Π(µt, µv) in 2.1) in a small range without affecting the optimal solution to the dual problem. When the perturbation goes beyond a certain range, the dual solution becomes primal infeasible and the optimization problem needs to be solved again. Hence, the calibrated gradients are local information and we would like to know the perturbation radius such that the optimal dual solution remains unchanged—i.e., whether this range is large enough such that the calibrated gradients can accurately predict the actual change to the OT distance. If the perturbation goes beyond this range, the prediction may become inaccurate as the dual solution only encodes partial information about the optimization.
In our evaluation, we find that this range is about 5% to 25% of the probability measure of the datapoint (µ(·)(zi)) for perturbations in both directions and the pattern seems independent of the size of the datasets. This range being less than the probability mass of a datapoint suggests that we are only able to predict the change to the OT distance for removing/adding a datapoint to the dataset approximately, though, the relative error is well acceptable (depicted in Figure 2).
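The locality of the calibrated gradients can be probed directly: perturb the mass of one point along the direction described above, re-solve the OT problem, and compare the actual change with the first-order prediction. A sketch is shown below; it reuses mu_t, mu_v, cost, and grad_train from the previous snippets, and the perturbation size is an illustrative choice.

```python
def predicted_vs_actual_change(i, delta, mu_t, mu_v, cost, grad_train):
    """Shift `delta` of mass onto training point i while uniformly decreasing the
    others (the direction used by the calibrated gradient), then compare the exact
    change of the OT distance with the gradient-based prediction."""
    mu_shift = mu_t - delta / (len(mu_t) - 1)
    mu_shift[i] = mu_t[i] + delta      # assumes delta is small enough to keep all masses >= 0
    base = ot.emd2(mu_t, mu_v, cost)
    perturbed = ot.emd2(mu_shift, mu_v, cost)
    return perturbed - base, grad_train[i] * delta

actual, predicted = predicted_vs_actual_change(0, 0.1 * mu_t[0], mu_t, mu_v, cost, grad_train)
```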
3.2 PRECISE RECOVERY OF RANKING FOR DATA VALUES OBTAINED FROM ENTROPY-OT
Due to computational advantages of the entropy-OT (defined in Eq. 2.1), one needs to resort to the solutions to entropy-OT to calculate data values. We quantify the deviation in the calibrated gradients caused by the entropy regularizer. This analysis provides foundations on the potential impact of the deviation on the applications built on these gradients. Theorem 2. Let OT(µt, µv) and OTε(µt, µv) be the original formulation and entropy penalized formulation (as defined in 2.1) for the OT problem between the empirical measures µt and µv associated with the two datasets Dt and Dv, respectively, where |Dt| = N and |Dv| = M . Then,
for any $i \neq j \neq k \in \{1, 2, \dots, N\}$ and $o \neq p \neq q \in \{1, 2, \dots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $D_t$ and the difference for $z'_p$ and $z'_q$ in $D_v$ can be calculated as
$$\frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_k)} - \varepsilon\cdot\frac{N}{N-1}\cdot\left(\frac{1}{(\pi^*_\varepsilon)_{kj}} - \frac{1}{(\pi^*_\varepsilon)_{ij}}\right), \qquad (2)$$
$$\frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} - \varepsilon\cdot\frac{M}{M-1}\cdot\left(\frac{1}{(\pi^*_\varepsilon)_{qo}} - \frac{1}{(\pi^*_\varepsilon)_{po}}\right), \qquad (3)$$
where π∗ε is the optimal primal solution to the entropy penalized OT problem defined in 2.1, zj is any datapoint in Dt other than zi or zk, and z′o is any datapoint in Dv other than z′p or z′q .
The gradient difference on the left-hand side of (2) represents the groundtruth value difference between two training points zi and zk as the values are calculated based on the original OT formulation. In practice, for the sake of efficiency, one only solves the regularized formulation instead and, therefore, this groundtruth difference cannot be obtained directly. Theorem 2 nevertheless indicates a very interesting fact that one can calculate the groundtruth difference based on the solutions to the regularized problem, because every term in the right-hand side only depends on the solutions to the regularized problem. Particularly, the groundtruth value difference is equal to the value difference produced by the regularized solutions plus some calibration terms that scale with ε (Nutz & Wiesel, 2021). This result indicates that while it is not possible to obtain individual groundtruth value by solving the regularized problem, one can actually exactly recover the groundtruth value difference based on the regularized solutions. In many applications of data valuation such as data selection, it is the order of data values that matters (Kwon & Zou, 2021). For instance, to filter out low-quality data, one would first rank the datapoints based on their values and then throw the points with lowest values. In these applications, solving the entropy-regularized program is an ideal choice—which is both efficient and recovers the exact ranking of datapoint values. Finally, note that Eq. 3 presents a symmetric result for the calibrated gradients for validation data. In our experiments, we set ϵ = 0.1, rendering the corresponding calibration terms to be negligible. As a result, we can directly use the calibrated gradients solved by the regularized program to rank datapoint values.
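To illustrate the recovery result, the sketch below solves the entropy-regularized problem with a plain Sinkhorn iteration written in numpy (so no particular solver API is assumed) and applies the calibration term from the derivation in Appendix A, where j indexes a fixed reference column of the regularized plan. The variables mu_t, mu_v, and cost are the placeholders from the earlier snippets; a log-domain implementation would be preferable for numerical stability.

```python
import numpy as np

def sinkhorn_potentials(mu_t, mu_v, cost, eps=0.1, n_iter=2000):
    """Basic Sinkhorn iterations; returns the entropic plan and dual potentials."""
    K = np.exp(-cost / eps)
    u, v = np.ones_like(mu_t), np.ones_like(mu_v)
    for _ in range(n_iter):
        u = mu_t / (K @ v)
        v = mu_v / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    return plan, eps * np.log(u), eps * np.log(v)

eps = 0.1
plan_eps, f_eps, g_eps = sinkhorn_potentials(mu_t, mu_v, cost, eps)
grad_eps = f_eps - (f_eps.sum() - f_eps) / (len(f_eps) - 1)  # entropic calibrated gradients

# Recover the ground-truth value difference between training points i and k from
# the regularized solution plus an O(eps) calibration term (cf. Theorem 2).
i, k, j = 0, 1, 0                      # j: fixed reference column of the plan
N = len(mu_t)
calibration = eps * N / (N - 1) * (1.0 / plan_eps[k, j] - 1.0 / plan_eps[i, j])
exact_diff = (grad_eps[i] - grad_eps[k]) - calibration
```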
4 EXPERIMENTS
In this section, we demonstrate the practical efficacy and efficiency of LAVA on various classification datasets. We compare with nine baselines: (1) Influence functions (INF) (Koh & Liang, 2017), which approximates the LOO error with first-order extrapolation; (2) TracIn-Clean (Pruthi et al., 2020), which accumulates the loss change on validation data during training whenever the training point of interest is sampled; (3) TracIn-Self (Pruthi et al., 2020), which is similar to TracIn-Clean but accumulates the training loss changes; (4) KNN-Shapley (KNN-SV) (Jia et al., 2019a), which
approximates the Shapley value using K-Nearest-Neighbor as a proxy model; and (5) Random, a setting where we select a random subset from the target dataset. We also consider the popular data valuation approaches: (6) Permutation Sampling-based Shapely value (Perm-SV) (Jia et al., 2019b), (7) Least Cores (LC) (Yan & Procaccia, 2021), (8) TMC-Shapley (TMC-SV) and (9) G-Shapley (G-SV) (Ghorbani & Zou, 2019). Baselines (6)-(9) are, however, computationally infeasible for the scale of data that we study here. So we exclude them from the evaluation of efficacy in different use cases. We also provide a detailed runtime comparison of all baselines. For all methods to be compared, a validation set of 10, 000 samples is assumed. For our method, we first use the validation data to train a deep neural network model PreActResNet18 (He et al., 2016) from scratch for feature extraction. Then, from its output, we compute the class-wise Wasserstein distance and the calibrated gradients for data valuation. Details about datasets, models, hyperparameter settings, and ablation studies of the hyperparameters and validation sizes are provided in Appendix B.
We evaluate on five different use cases of data valuation: detecting backdoor attack, poisoning attack, noisy features, mislabeled data, and irrelevant data. The first four are conventional tasks in the literature and the last one is a new case. All of them have a common goal of identifying “low-quality” training points. To achieve this goal, we rank datapoints in ascending order of their values and remove some number of points with lowest data values. For each removal budget, we calculate the detection rate, i.e., the percentage of the points that are truly bad within the removed points.
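This evaluation protocol can be summarized in a few lines: rank points by value, remove the lowest-valued ones, and report, for each budget, the fraction of removed points that are truly corrupted. The sketch below uses placeholder inputs and follows the definition of detection rate given above.

```python
import numpy as np

def detection_rate_curve(values, bad_indices, budgets):
    """values: per-point data values (lower = worse); bad_indices: indices of the
    corrupted points; budgets: removal budgets to evaluate."""
    order = np.argsort(values)                 # ascending: lowest-value points removed first
    is_bad = np.zeros(len(values), dtype=bool)
    is_bad[np.asarray(bad_indices)] = True
    # Fraction of removed points that are truly bad, for each removal budget.
    return [is_bad[order[:b]].mean() for b in budgets]
```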
Backdoor Attack Detection. A popular technique of introducing backdoors to models is by injecting maliciously constructed data into a training set (Zeng et al., 2021). At test time, any trained model would misclassify inputs patched with a backdoor trigger as the adversarially-desired target class. In the main text, we consider the Trojan Square attack, a popular attack algorithm (Liu et al., 2017), which injects training points that contain a backdoor trigger and are relabeled as a target class. The evaluation of other types of backdoor attacks can be found in Appendix B. To simulate this attack, we select the target attack class Airplane and poison 2500 (5%) samples of the total CIFAR-10 training set (50k) with a square trigger. In Figure 3 I.(a), we compare the detection rates of different data valuation methods. LAVA and TracIn-Clean outperform the others by a large margin. In particular, for LAVA, the first 20% of the points that it removes contain at least 80% of the poisoned data. We also evaluate whether the model trained after the removal still suffers from the backdoor vulnerability. To perform this evaluation, we calculate the attack accuracy, i.e., the accuracy of the model trained on the remaining points to predict backdoored examples as the target label. A successful data removal would yield a lower attack accuracy. Figure 3 I.(b) shows that our method already takes effect in the early stages, whereas other baselines can start defending from the attack only after removing over 13, 000 samples. The efficacy of LAVA is in part attributable to inspection of distances between both features and labels. The backdoored training samples that are poisoned to the target class will be
“unnatural” in that class, i.e., they have a large feature distance from the original samples in the target class. While the poisoned examples contain a small feature perturbation compared to the natural examples from some other classes, their label distance to them is large because their labels are altered.
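For reference, a simplified version of the square-trigger poisoning used in this experiment can be simulated as follows; the trigger size, location, pixel value, and target class are illustrative choices and not the exact settings of Liu et al. (2017).

```python
import numpy as np

def poison_with_square_trigger(images, labels, poison_idx, target_class=0,
                               trigger_size=5, trigger_value=1.0):
    """images: float array (n, H, W, C) in [0, 1]. Stamp a bright square in the
    bottom-right corner of the selected images and relabel them to the target class."""
    images, labels = images.copy(), labels.copy()
    images[poison_idx, -trigger_size:, -trigger_size:, :] = trigger_value
    labels[poison_idx] = target_class
    return images, labels

# Example: choose 5% of a 50k-image training set to poison.
rng = np.random.default_rng(0)
poison_idx = rng.choice(50_000, size=2_500, replace=False)
# images, labels = poison_with_square_trigger(images, labels, poison_idx, target_class=0)
```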
Poisoning Attack Detection. Poisoning attacks are similar to backdoor attacks in the sense that they both inject adversarial points into the training set to manipulate the prediction of certain test examples. However, poisoning attacks are considered unable to control test examples. We consider a popular attack termed “feature-collision” attack (Shafahi et al., 2018), where we select a target sample from the Cat class test set and blend the selected image with the chosen target class training samples, Frog in our case. In this attack, we do not modify labels and blend the Cat image only into 50 (0.1%) samples of Frog, which makes this attack especially hard to detect. During inference time, we expect the attacked model to consistently classify the chosen Cat as a Frog. In Figure 3 II.(a), we observe that LAVA outperforms all baselines and achieves an 80% detection rate by removing only 11k samples, which is around 60% fewer samples than the highest baseline. Figure 3 II.(b) shows that by removing data according to LAVA ranking, the target model has reduced the confidence of predicting the target Cat sample as a Frog to below 40%. Our technique leverages the fact that the features from a different class are mixed with the features of the poisoned class, which increases the feature distance between the poisoned and non-poisoned Frog examples.
Noisy Feature Detection. While adding small Gaussian noise to training samples may benefit model robustness (Rusak et al., 2020), strong noise, such as that due to sensor failure, can significantly affect the model performance. We add strong white noise to 25% of the CIFAR-10 dataset without changing any labels. Our method performs extremely well, as shown in Figure 3 III.(a), and detects all 12,500 noisy samples by inspecting fewer than 15,000 samples. This explains the sudden drop of the model’s accuracy at the removal budget of 15,000 samples in Figure 3 III.(b): the model starts throwing away only clean samples from that point. LAVA performs well in this scenario since the strong noise increases the feature distance significantly.
Mislabeled Data Detection. Due to the prevalence of human labeling errors (Karimi et al., 2020), it is crucial to detect mislabeled samples. We shuffle the labels of 25% of the samples in the CIFAR-10 dataset to random classes. Unlike backdoor and poisoning attacks, this case is especially hard to detect since the wrong samples are spread out across classes instead of being concentrated in a single target class. However, as shown in Figure 3 IV.(a), LAVA’s detection rate outperforms the other baselines, and the model performance is maintained even after 20k points are removed (Figure 3 IV.(b)).
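The label corruption used here is easy to reproduce; a sketch with an illustrative 25% corruption rate is given below, and the returned indices serve as the ground truth for the detection-rate evaluation.

```python
import numpy as np

def shuffle_labels(labels, frac=0.25, num_classes=10, seed=0):
    """Reassign a random class to a `frac` fraction of the points."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    corrupt_idx = rng.choice(len(labels), size=int(frac * len(labels)), replace=False)
    labels[corrupt_idx] = rng.integers(0, num_classes, size=len(corrupt_idx))
    return labels, corrupt_idx
```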
Irrelevant Data Detection. Often the collected datasets through web scraping have irrelevant samples in given classes (Northcutt et al., 2021; Tsipras et al., 2020), e.g., in a class of Glasses, we might have both water glass and eyeglasses due to lack of proper inspection or class meaning specification. This case is different from the mislabeled data scenario, in which case the training features are all relevant to the task. Since the irrelevant examples are highly likely to have completely different features than the desired class representation, LAVA is expected to detect these examples. We design an experiment where we remove all images of one specific class from the classification output but split them equally to the other remaining classes as irrelevant images. As shown in Figure 4, the detection result over a class varies based on the distance between that class and the class from which irrelevant images are drawn. For instance, when Deer images are placed into the Truck class, we can detect almost 94% of all Deer images within first 500 removed images. On the other hand, when we place Cat images into dog class, our detection rate drops to 45% within the top 500.
Computational Efficiency. So far, we have focused on the method’s performance without considering the actual runtime. We compare the runtime-performance tradeoff on the CIFAR-10 example of 2000 samples with 10% backdoor data, a scale in which every baseline can be executed in a reasonable time. As shown in Figure 5, our method achieves a significant improvement in efficiency while being able to detect bad data more effectively.
Dependence on Validation Data Size. For the current experiments, we have assumed a validation set of size 10K. Such a scale of data is not hard to acquire, as one can get high-quality data from crowdsourcing platforms, such as Amazon Mechanical Turk for $12 per 1K samples (AWS, 2019). While our method achieves remarkable performance when using 10K validation data, we perform an ablation study on much smaller sets (Appendix B.2.1), where LAVA, notably, can still outperform other baselines. As an example on mislabeled data detection, our method with 2K validation data achieves an 80% detection rate at a data removal budget of 25K (Fig. 9), whereas the best performing baseline achieves such performance with a validation set five times larger, 10K (Fig. 3 IV.(a)). Furthermore, even on a tiny validation set of size 500, LAVA consistently outperforms all the baselines with the same validation size (Fig. 11). This shows that our method remains effective across various sizes of validation data.
5 RELATED WORK
Existing data valuation methods include LOO and influence function (Koh & Liang, 2017), the Shapley value (Jia et al., 2019b; Ghorbani & Zou, 2019; Wang & Jia, 2023), the Banzhaf value (Wang & Jia, 2022), Least Cores (Yan & Procaccia, 2021), Beta Shapley (Kwon & Zou, 2021), and reinforcement learning-based method (Yoon et al., 2020). However, they all assume the knowledge of the underlying learning algorithms and suffer large computational complexity. The work of Jia et al. (2019a) has proposed to use K-Nearest Neighbor Classifier as a default proxy model to perform data valuation. While it can be thought of as a learning-
agnostic data valuation method, it is not as effective and efficient as our method in distinguishing data quality. Xu et al. (2021) propose to use the volume to measure the utility of a dataset. Volume is agnostic to learning algorithms and easy to calculate because it is defined simply as the square root of the trace of the feature matrix inner product. However, the sole dependence on features makes it incapable of detecting bad data caused by labeling errors. Moreover, to evaluate the contribution of individual points, the authors propose to resort to the Shapley value, which would still be expensive for large datasets.
6 DISCUSSION AND OUTLOOK
This paper describes a learning-agnostic data valuation framework. In particular, in contrast to existing methods which typically adopt model validation performance as the utility function, we approximate the utility of a dataset based on its class-wise Wasserstein distance to a given validation set and provide theoretical justification for this approximation. Furthermore, we propose to use the calibrated gradients of the OT distance to value individual datapoints, which can be obtained for free if one uses an off-the-shelf solver to calculate the Wasserstein distance. Importantly, we have tested on various datasets, and our LAVA framework can significantly improve the state-of-the-art performance of using data valuation methods to detect bad data while being substantially more efficient. Due to the stochasticity of ML and the inherent tolerance to noise, it is often challenging to identify low-quality data by inspecting their influence on model performance scores. The take-away from our empirical study is that despite being extensively adopted in the past, low-quality data detection through model performance changes is actually suboptimal; lifting the dependence of data valuation on the actual learning process provides a better pathway to distinguish data quality.
Despite the performance and efficiency improvement, our work still has some limitations. As a result, it opens up many new investigation venues: (1) How to further lift the dependence on validation data? While a validation set representative of the downstream learning task is a common assumption in the ML literature, it may or may not be available during data exchange. (2) Our design could be vulnerable to existing poisons that directly or indirectly minimize the similarity to clean data (Huang et al., 2021; Pan et al., 2022). Further investigation into robust data valuation would be intriguing. (3) Our current method does not have enough flexibility for tasks that aim for goals beyond accuracy, e.g., fairness. Folding other learning goals in is an exciting direction. (4) Customizing the framework to natural language data is also of practical interest.
7 ACKNOWLEDGEMENTS
RJ and the ReDS Lab gratefully acknowledge the support from the Cisco Research Award, the Virginia Tech COE Fellowship, and the NSF CAREER Award. Jiachen T. Wang is supported by Princeton’s Gordon Y. S. Wu Fellowship. YZ is supported by the Amazon Fellowship.
APPENDIX A RESTATEMENT OF THEOREMS AND FULL PROOFS
In this section, we will restate our main results and give full proofs.
A.1 SUMMARY OF NOTATIONS
Let $\mu_t, \mu_v$ be the training distribution and validation distribution, respectively. We denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. We can then denote the joint distributions of the random data-label pairs $(x, f_t(x))_{x\sim\mu_t(x)}$ and $(x, f_v(x))_{x\sim\mu_v(x)}$ as $\mu_t^{f_t}$ and $\mu_v^{f_v}$, respectively, which are the same notations as $\mu_t$ and $\mu_v$ but made with explicit dependence on $f_t$ and $f_v$ for clarity. The distributions of $(f_t(x))_{x\sim\mu_t(x)}$ and $(f_v(x))_{x\sim\mu_v(x)}$ are denoted as $\mu_{f_t}$, $\mu_{f_v}$, respectively. Besides, we define the conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\mathbb{I}[f_v(x)=y]\,dx}$. Let $f : \mathcal{X} \to [0,1]^V$ be the model trained on the training data and $L : \{0,1\}^V \times [0,1]^V \to \mathbb{R}_+$ be the loss function. We denote $\pi \in \Pi(\mu_1, \mu_2)$ as a coupling between a pair of distributions $\mu_1, \mu_2$ and $d : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ as a distance metric function. The 1-Wasserstein distance with respect to the distance function $d$ between two distributions $\mu_1, \mu_2$ is defined as $W_d(\mu_1, \mu_2) := \inf_{\pi\in\Pi(\mu_1,\mu_2)} \mathbb{E}_{(x,y)\sim\pi}[d(x,y)]$. More generally, the 1-Wasserstein distance with respect to the cost function $C$ is defined as $W_C(\mu_1, \mu_2) := \inf_{\pi\in\Pi(\mu_1,\mu_2)} \mathbb{E}_{(x,y)\sim\pi}[C(x,y)]$.
A.2 STATEMENT OF ASSUMPTIONS
To prove Theorem 1, we need the concept of probabilistic cross-Lipschitzness, which assumes that two labeling functions should produce consistent labels with high probability on two close instances. Definition 3 (Probabilistic Cross-Lipschitzness). Two labeling functions ft : X → {0, 1}V and fv : X → {0, 1}V are (ϵ, δ)-probabilistic cross-Lipschitz w.r.t. a joint distribution π over X × X if for all ϵ > 0:
$$\mathbb{P}_{(x_1,x_2)\sim\pi}\big[\|f_t(x_1) - f_v(x_2)\| > \epsilon\, d(x_1, x_2)\big] \le \delta. \qquad (4)$$
Intuitively, given labeling functions ft, fv and a coupling π, we can bound the probability of finding pairs of training and validation instances labelled differently in a (1/ϵ)-ball with respect to π.
Our Assumptions. Assuming that f is an ϵ-Lipschitz function. Given a metric function d(·, ·), we define a cost function C between (xt, yt) and (xv, yv) as
$$C((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \qquad (5)$$
where $c$ is a constant. Let $\pi^*_{x,y}$ be the coupling between $\mu_t^{f_t}, \mu_v^{f_v}$ such that
$$\pi^*_{x,y} := \arg\inf_{\pi\in\Pi(\mu_t^{f_t}, \mu_v^{f_v})} \mathbb{E}_{((x_t,y_t),(x_v,y_v))\sim\pi}\big[C((x_t, y_t), (x_v, y_v))\big]. \qquad (6)$$
We define two couplings $\pi^*$ and $\tilde{\pi}^*$ between $\mu_t(x), \mu_v(x)$ as follows:
$$\pi^*(x_t, x_v) := \int_{\mathcal{Y}}\int_{\mathcal{Y}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dy_t\, dy_v. \qquad (7)$$
For $\tilde{\pi}^*$, we first need to define a coupling between $\mu_{f_t}, \mu_{f_v}$:
$$\pi^*_y(y_t, y_v) := \int_{\mathcal{X}}\int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dx_t\, dx_v \qquad (8)$$
and another coupling between $\mu_t^{f_t}, \mu_v^{f_v}$:
$$\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\,\mu_t(x_t|y_t)\,\mu_v(x_v|y_v). \qquad (9)$$
Finally, $\tilde{\pi}^*$ is constructed as follows:
$$\tilde{\pi}^*(x_t, x_v) := \int_{\mathcal{Y}}\int_{\mathcal{Y}} \pi^*_y(y_t, y_v)\,\mu_t(x_t|y_t)\,\mu_v(x_v|y_v)\, dy_t\, dy_v. \qquad (10)$$
It is easy to see that all joint distributions defined above are couplings between the corresponding distribution pairs.
We assume that ft, fv are (ϵtv, δtv)-probabilistic cross-Lipschitz with respect to π̃∗ in metric d. Additionally, we assume that ϵtv/ϵ ≤ c and the loss function L is k-Lipschitz in both inputs. Besides, from their definitions above, we have that ∥f(x)∥, ∥ft(x)∥, ∥fv(x)∥ ≤ V . The assumption of probabilistic cross-Lipschitzness would be violated only when the underlying coupling assigns large probability to pairs of training-validation features that are close enough (within 1/ϵtv-ball) but labeled differently. However, π̃∗ is generally not such a coupling. Note that π∗ is the optimal coupling between training and validation distributions that minimizes a cost function C pertaining to both feature and label space. Hence, π∗y(yt, yv), the marginal distribution of π
∗ over the training and validation label space, tends to assign high probability to those label pairs that agree. On the other hand, π̃∗x,y can be thought of as a coupling that first generates training-validation labels from π∗y and then generates the features in each dataset conditioning on the corresponding labels. Hence, the marginal distribution π̃∗ of training-validation feature pairs generated by π̃∗x,y would assign high likelihood to those features with the same labels. So, conceptually, the probabilistic cross-Lipschitzness assumption should be easily satisfied by π̃∗.
A.3 DETAILED PROOF
Theorem 1 (restated). Given the above assumptions, we have
$$\mathbb{E}_{x\sim\mu_v(x)}\big[L(f_v(x), f(x))\big] \le \mathbb{E}_{x\sim\mu_t(x)}\big[L(f_t(x), f(x))\big] + k\epsilon\, W_C\big(\mu_t^{f_t}, \mu_v^{f_v}\big) + 2kV\delta_{tv}. \qquad (11)$$
Proof.
$$\begin{aligned}
\mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] &= \mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))] + \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))] \qquad (12\text{--}13)\\
&\le \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))] + \big|\mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))]\big|. \qquad (14)
\end{aligned}$$
We bound $\big|\mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))]\big|$ as follows:
$$\begin{aligned}
&\big|\mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))]\big| \qquad (15)\\
&= \Big|\int_{\mathcal{X}^2} \big[L(f_v(x_v), f(x_v)) - L(f_t(x_t), f(x_t))\big]\, d\pi^*(x_t, x_v)\Big| \qquad (16)\\
&= \Big|\int_{\mathcal{X}^2} \big[L(f_v(x_v), f(x_v)) - L(f_v(x_v), f(x_t)) + L(f_v(x_v), f(x_t)) - L(f_t(x_t), f(x_t))\big]\, d\pi^*(x_t, x_v)\Big| \qquad (17)\\
&\le \underbrace{\int_{\mathcal{X}^2} \big|L(f_v(x_v), f(x_v)) - L(f_v(x_v), f(x_t))\big|\, d\pi^*(x_t, x_v)}_{U_1} \qquad (18)\\
&\quad + \underbrace{\int_{\mathcal{X}^2} \big|L(f_v(x_v), f(x_t)) - L(f_t(x_t), f(x_t))\big|\, d\pi^*(x_t, x_v)}_{U_2}, \qquad (19)
\end{aligned}$$
where the last inequality is due to triangle inequality.
Now, we bound $U_1$ and $U_2$ separately. For $U_1$, we have
$$\begin{aligned}
U_1 &\le k \int_{\mathcal{X}^2} \|f(x_v) - f(x_t)\|\, d\pi^*(x_t, x_v) \qquad (20)\\
&\le k\epsilon \int_{\mathcal{X}^2} d(x_t, x_v)\, d\pi^*(x_t, x_v), \qquad (21)
\end{aligned}$$
where both inequalities are due to Lipschitzness of $L$ and $f$. In order to bound $U_2$, we first recall that $\pi^*_y(y_t, y_v) = \int_{\mathcal{X}}\int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dx_t\, dx_v$ and $\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\,\mu_t(x_t|y_t)\,\mu_v(x_v|y_v)$. Observe that
$$\begin{aligned}
U_2 &= \int_{\mathcal{X}^2}\int_{\mathcal{Y}^2} \big|L(f_v(x_v), f(x_t)) - L(f_t(x_t), f(x_t))\big|\, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (22)\\
&= \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2} \big|L(y_v, f(x_t)) - L(y_t, f(x_t))\big|\, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (23)\\
&\le k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2} \|y_v - y_t\|\, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (24)\\
&= k \int_{\mathcal{Y}^2} \|y_v - y_t\|\, d\pi^*_y(y_t, y_v), \qquad (25)
\end{aligned}$$
where the second equality is due to a condition that if yt ̸= ft(xt) or yv ̸= fv(xv), then π∗x,y((xt, yt), (xv, yv)) = 0.
Now we can bound $U_2$ as follows:
$$\begin{aligned}
U_2 &\le k \int_{\mathcal{Y}^2} \|y_v - y_t\|\, d\pi^*_y(y_t, y_v) \qquad (26)\\
&= k \int_{\mathcal{X}^2}\int_{\mathcal{Y}^2} \|y_v - y_t\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (27)\\
&= k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)), \qquad (28)
\end{aligned}$$
where the last step holds since if yt ̸= ft(xt) or yv ̸= fv(xv) then π̃∗x,y((xt, yt), (xv, yv)) = 0.
Define the region $A = \{(x_t, x_v) : \|f_v(x_v) - f_t(x_t)\| < \epsilon_{tv}\, d(x_t, x_v)\}$, then
$$\begin{aligned}
&k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (29)\\
&= k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2\setminus A} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) + k \int_{\mathcal{Y}^2}\int_{A} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (30\text{--}31)\\
&\le k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2\setminus A} 2V\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) + k \int_{\mathcal{Y}^2}\int_{A} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)). \qquad (32\text{--}33)
\end{aligned}$$
Let us define $\tilde{f}_t(x_t) = f_t(x_t)$ and $\tilde{f}_v(x_v) = f_v(x_v)$ if $(x_t, x_v) \in A$, and $\tilde{f}_t(x_t) = \tilde{f}_v(x_v) = 0$ otherwise (note that $\|\tilde{f}_v(x_v) - \tilde{f}_t(x_t)\| \le \epsilon_{tv}\, d(x_t, x_v)$ for all $(x_t, x_v) \in \mathcal{X}^2$); then we can bound the second term as follows:
$$\begin{aligned}
&k \int_{\mathcal{Y}^2}\int_{A} \|f_v(x_v) - f_t(x_t)\|\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) \qquad (34)\\
&\le k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \int_{A} \|f_v(x_v) - f_t(x_t)\|\, d\mu_t(x_t|y_t)\, d\mu_v(x_v|y_v) \qquad (35)\\
&= k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \int_{\mathcal{X}^2} \big\|\tilde{f}_v(x_v) - \tilde{f}_t(x_t)\big\|\, d\mu_t(x_t|y_t)\, d\mu_v(x_v|y_v) \qquad (36)\\
&= k \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v) \big\|\mathbb{E}_{x_v\sim\mu_v(\cdot|y_v)}[\tilde{f}_v(x_v)] - \mathbb{E}_{x_t\sim\mu_t(\cdot|y_t)}[\tilde{f}_t(x_t)]\big\| \qquad (37)\\
&\le k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)). \qquad (38)
\end{aligned}$$
Inequality (38) is a consequence of the duality form of the Kantorovich-Rubinstein theorem (Villani (2021), Chapter 1).
Combining the two parts, we have
$$\begin{aligned}
U_2 &\le k \int_{\mathcal{Y}^2}\int_{\mathcal{X}^2\setminus A} 2V\, d\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) + k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)) \qquad (39\text{--}40)\\
&\le 2kV\delta_{tv} + k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \qquad (41)
\end{aligned}$$
where the last step is due to the probabilistic cross-Lipschitzness of ft, fv with respect to π̃∗x,y .
Now, combining the bounds for $U_1$ and $U_2$, we have
$$\begin{aligned}
&\mathbb{E}_{x\sim\mu_v(x)}[L(f_v(x), f(x))] - \mathbb{E}_{x\sim\mu_t(x)}[L(f_t(x), f(x))] \qquad (42)\\
&\le k\epsilon \int_{\mathcal{X}^2} d(x_t, x_v)\, d\pi^*(x_t, x_v) + 2kV\delta_{tv} + k\epsilon_{tv} \int_{\mathcal{Y}^2} d\pi^*_y(y_t, y_v)\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)) \qquad (43)\\
&= k \int_{(\mathcal{X}\times\mathcal{Y})^2} \big[\epsilon\, d(x_t, x_v) + \epsilon_{tv}\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))\big]\, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) + 2kV\delta_{tv} \qquad (44)\\
&\le k \int_{(\mathcal{X}\times\mathcal{Y})^2} \big[\epsilon\, d(x_t, x_v) + c\epsilon\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))\big]\, d\pi^*_{x,y}((x_t, y_t), (x_v, y_v)) + 2kV\delta_{tv} \qquad (45)\\
&= k\epsilon\, \mathbb{E}_{\pi^*_{x,y}}\big[C((x_t, y_t), (x_v, y_v))\big] + 2kV\delta_{tv} \qquad (46)\\
&= k\epsilon\, W_C\big(\mu_t^{f_t}, \mu_v^{f_v}\big) + 2kV\delta_{tv}, \qquad (47)
\end{aligned}$$
where the last step is due to the definition of π∗x,y . This leads to the final conclusion.
Theorem 5 (restated). Let $\mathrm{OT}(\mu_t, \mu_v)$ and $\mathrm{OT}_\varepsilon(\mu_t, \mu_v)$ be the original formulation and the entropy-penalized formulation (as defined in Subsection 2.1) for the OT problem between the empirical measures $\mu_t$ and $\mu_v$ associated with the two datasets $D_t$ and $D_v$, respectively. Then, for any $i \neq j \neq k \in \{1, 2, \dots, N\}$ and $o \neq p \neq q \in \{1, 2, \dots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $D_t$ and the difference for $z'_p$ and $z'_q$ in $D_v$ can be calculated as
$$\frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_k)} - \varepsilon\cdot\frac{N}{N-1}\cdot\left(\frac{1}{(\pi^*_\varepsilon)_{kj}} - \frac{1}{(\pi^*_\varepsilon)_{ij}}\right),$$
$$\frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} - \varepsilon\cdot\frac{M}{M-1}\cdot\left(\frac{1}{(\pi^*_\varepsilon)_{oq}} - \frac{1}{(\pi^*_\varepsilon)_{op}}\right),$$
where $\pi^*_\varepsilon$ is the optimal primal solution to the entropy-penalized OT problem, $z_j$ is any datapoint in $D_t$ other than $z_i$ or $z_k$, $z'_o$ is any datapoint in $D_v$ other than $z'_p$ or $z'_q$, $|D_t| = N$, and $|D_v| = M$.
Proof. Let $L(\pi, f, g)$ and $L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon)$ be the Lagrangian functions of the original formulation and the entropy-penalized formulation between the datasets $D_t$ and $D_v$, respectively, which can be written as
$$L(\pi, f, g) = \langle \pi, c\rangle + \sum_{i=1}^{N} f_i \cdot \big(\pi'_i \cdot I_M - \mu_t(z_i)\big) + \sum_{j=1}^{M} g_j \cdot \big(I'_N \cdot \pi_j - \mu_v(z_j)\big),$$
$$L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon) = \langle \pi_\varepsilon, c\rangle + \varepsilon \cdot \sum_{i=1}^{N}\sum_{j=1}^{M} \log \frac{(\pi_\varepsilon)_{ij}}{\mu_t(z_i)\cdot\mu_v(z_j)} + \sum_{i=1}^{N} (f_\varepsilon)_i \cdot \big[(\pi_\varepsilon)'_i \cdot I_M - \mu_t(z_i)\big] + \sum_{j=1}^{M} (g_\varepsilon)_j \cdot \big[I'_N \cdot (\pi_\varepsilon)_j - \mu_v(z_j)\big],$$
where $c \in \mathbb{R}^{N\times M}$ is the cost matrix consisting of distances between the $N$ datapoints in $D_t$ and the $M$ datapoints in $D_v$, $I_M = (1, 1, \dots, 1) \in \mathbb{R}^{M\times 1}$ and $I'_N = (1, 1, \dots, 1)^T \in \mathbb{R}^{1\times N}$, $\pi$ and $(f, g)$ denote the primal and dual variables, and $\pi'_i$ and $\pi_j$ denote the $i$th row and $j$th column of the matrix $\pi$, respectively.
The first-order necessary condition for optima in the Lagrangian Multiplier Theorem gives that $\nabla_\pi L(\pi^*, f^*, g^*) = 0$ and $\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon) = 0$, where $\pi^*$ and $(f^*, g^*)$ denote the optimal solutions to the primal and dual problems, respectively. Thus, for any $i \in \{1, 2, \dots, N\}$ and $j \in \{1, 2, \dots, M\}$, we have
$$\nabla_\pi L(\pi^*, f^*, g^*)_{ij} = c_{ij} + f^*_i + g^*_j = 0,$$
$$\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon)_{ij} = c_{ij} + \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} + (f_\varepsilon)^*_i + (g_\varepsilon)^*_j = 0.$$
Subtracting, we have
$$\big[f^*_i - (f_\varepsilon)^*_i\big] + \big[g^*_j - (g_\varepsilon)^*_j\big] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} = 0.$$
Then, for any $k \neq i \in \{1, 2, \dots, N\}$, we have
$$\big[f^*_k - (f_\varepsilon)^*_k\big] + \big[g^*_j - (g_\varepsilon)^*_j\big] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{kj}} = 0.$$
Subtracting and reorganizing, we get
$$(f_\varepsilon)^*_i - (f_\varepsilon)^*_k = (f^*_i - f^*_k) - \varepsilon \cdot \left[\frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}}\right].$$
From the definition of the calibrated gradients in Eq. 1, we have
$$\frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,(f^*_i - f^*_k),$$
$$\frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,\big[(f_\varepsilon)^*_i - (f_\varepsilon)^*_k\big].$$
Finally, subtracting and reorganizing, we have
$$\frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left[\frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}}\right].$$
The proof for the second part of the Theorem is similar:
$$\frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}(\mu_t,\mu_v)}{\partial \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left[\frac{1}{(\pi^*_\varepsilon)_{op}} - \frac{1}{(\pi^*_\varepsilon)_{oq}}\right].$$
Then the proof is complete.
APPENDIX B ADDITIONAL EXPERIMENTAL RESULTS
B.1 EVALUATING DATA VALUATION USE CASES ON DIVERSE DATASETS
In the main text, we have focused our evaluation on CIFAR-10. Here, we provide experiments to show the effectiveness of LAVA in detecting bad data on diverse datasets.
Backdoor Attack Detection. We evaluate another type of backdoor attack (Section 4), which is the Hello Kitty blending attack (Blend) (Chen et al., 2017) that mixes the target class sample with the Hello Kitty image, as illustrated in Figure 8 (B). We attack the German Traffic Sign dataset (GTSRB) on the target class 6 by poisoning 1764 (5%) samples of the whole dataset. Our method achieves the highest detection rate, as shown in Figure 6(a). In particular, the 5000 points with lowest data values contain all poisoned data based on the LAVA data values, while the second best method on this task, KNN-SV, can cover all poisoned examples with around 11,000 samples. Our algorithm performs especially well for this attack, since the label of poisoned data is changed to the target class and the patching trigger is large. Both the label and feature changes contribute to the increase of the OT distance and thus ease the detection.
Noisy Feature Detection. Here, we show the usage of LAVA on the MNIST dataset where 25% of the whole dataset is contaminated by feature noise. Our method still outperforms all the baselines by detecting all noisy data within the first 14,000 samples, which is 5,000 fewer than the best baseline requires, as shown in Figure 6(b).
Figure 7: Visualization of irrelevant data detection within the CIFAR100 dataset. The left column is one example of the target class and the images on the right columns are selected irrelevant data in the corresponding classes detected by LAVA.
Irrelevant Data. We perform another irrelevant data detection experiment and focus on the CIFAR100 dataset. In Figure 7, we illustrate some of the irrelevant samples detected by LAVA. Intuitively, irrelevant data in the class should be easily detected by LAVA, since the images are far from the representative of the class and increasing the probability mass associated with these images leads to larger distributional distance to the clean validation data.
B.2 ABLATION STUDY
We perform an ablation study on validation size and on the hyperparameters in our method, where we provide insights on the impact of setting changes. We use the mislabeled detection use case and the CIFAR-10 dataset as an example setting for the ablation study.
B.2.1 VALIDATION SIZE
Figure 8: Visualization of each backdoor attack: A) Trojan-SQ attack. B) Blend attack. C) Trojan-WM attack.
For all the experiments in the main text, we use the validation set of size 10,000. Naturally, we want to examine the effect of the size of the validation set on the detection rate of mislabeled data. In Figure 9 (c), we illustrate the performance on the detection rate with smaller validation data sizes: 200, 500, 2, 000, and 5, 000. We observe that even reducing the validation set by half to 5, 000 can largely maintain the detection rate performance. Small validation sets (200, 500, 2, 000) degrade the detection rate by more than 50%. Despite the performance degradation, our detection performance with these small validation sizes is in fact comparable with the baselines in Figure 3 IV.(a) that
leverage the full validation size of 10, 000. Additionally, when restricting LAVA and other baselines to validation set of 500 samples, our method is better than the best baseline for detecting mislabeled data in the 50k CIFAR-10 samples with 25% being mislabeled as shown in Figure 11.
B.2.2 FEATURE WEIGHT
Recall the class-wise Wasserstein distance is defined with respect to the following distance metric: C((xt, yt), (xv, yv)) = d(xt, xv)+ cWd(µt(·|yt), µv(·|yv)). Actually, one can change the relative weight between feature distance d(xt, xv) and the label distance Wd(µt(·|yt), µv(·|yv)). Here, we show the effect of upweighting the feature distance, while keeping the label weight at 1 and the results are illustrated in Figure 9 (a). As we are moving away from uniform weight, the performance on detection rate is decreasing with larger feature weights. With feature weight of 100, our method performs similarly as the random detector. Indeed, as we increase weight on the features, the weight on the label distance is decreased. As the weight reaches 100, our method performs similarly as the feature embedder without knowing label information and hence, the mislabeled detection performance is comparable to the random baseline.
B.2.3 LABEL WEIGHT
Next, we shift focus to label weight. We examine the effect of upweighting the label distance, while keeping the feature weight at 1. In Figure 9 (b), as the label weight increases, the detection rate performance deteriorates. When we increase the label distance, the feature information becomes neglected, which is not as effective as the balanced weights between feature and label distances.
B.2.4 FEATURE EMBEDDER
We use a feature embedder to extract features for the feature-distance part of our method. We train the feature embedder on the accessible validation set until the training accuracy converges. Different embedder architectures might be sensitive to different aspects of the input and thus produce different feature outputs. Nevertheless, as we observe in Figure 10, the detection performance associated with different feature embedder architectures is similar. Hence, in practice, one can flexibly choose the feature embedder to be used in tandem with our method as long as it has large enough capacity. Furthermore, we note that these feature embedders have not learned the clean distribution from the validation data, e.g., in CIFAR-10 the model trained on 10K validation data achieves only around 65% accuracy on the 50K clean datapoints and the model trained on 500 validation data achieves around 25% accuracy. We additionally show in Figures 14 and 15 that our method significantly outperforms the PreActResNet18 model trained directly on validation data of size 500 and 10K in detecting bad data, which clearly distinguishes LAVA from simple feature embedders.
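As a concrete illustration of this step, the sketch below trains a small classifier on the validation set and uses its penultimate activations as the features entering the feature-distance part of the cost C; the architecture and interface are simplified placeholders rather than the PreActResNet18 setup used in our experiments.

```python
import torch
import torch.nn as nn

class SmallEmbedder(nn.Module):
    """Toy stand-in for the feature embedder trained on the validation set."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.head(self.features(x))

@torch.no_grad()
def extract_features(model, loader, device="cpu"):
    """Penultimate-layer features used for the feature part of the cost C."""
    model.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(model.features(x.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)
```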
B.3 BALANCING UNBALANCED DATASET
Although machine learning practitioners might be using clean data for training a model, the dataset can often be unbalanced, which can lead to model performance degradation (Thai-Nghe et al., 2009). To recover higher model accuracy, we can rebalance unbalanced datasets by removing points that cause such disproportion. We showcase how LAVA effectively rebalances the dataset by removing points with poor values and keeping points with the best values. We consider a CIFAR-10 dataset in which the class Frog is unbalanced and contains 5,000 samples while other classes have only half as many (i.e., 2,500 samples). In Figure 12, we demonstrate the effectiveness of the LAVA valuation, which not only reduces the dataset by removing poor-value points but also improves the model accuracy. In contrast, other valuation methods were not able to steadily increase the model accuracy and quickly degraded the model performance, which in turn highlights the effectiveness of our method.
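A sketch of this rebalancing procedure is given below: given per-point data values (e.g., negated calibrated gradients, higher is better), keep only the best-valued points within each class up to a per-class budget. Names and the budget are illustrative.

```python
import numpy as np

def rebalance_by_value(values, labels, per_class_budget):
    """Within each class, keep the `per_class_budget` points with the highest
    data values; returns the indices of the retained points."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        best_first = idx[np.argsort(-values[idx])]   # descending: best values first
        keep.extend(best_first[:per_class_budget])
    return np.sort(np.asarray(keep))
```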
B.4 REDUCING TRAINING SET SIZE
With the growing size of training datasets, the computation cost and memory overhead naturally increase, which can make it impossible for practitioners with limited resources to train a model. Therefore, the ability to reduce the training dataset size (Sener & Savarese, 2018) eases the computational burden and allows those with limited resources to still train models effectively. Motivated by this challenge, we leverage our data valuation method to significantly decrease the training dataset size while maintaining the model performance. As in the previous section, the idea is to keep a subset of datapoints with the best values and remove the poorly valued ones. To demonstrate the effectiveness of LAVA’s valuation, we perform this task on a clean CIFAR-10 dataset with 2,500 samples from each class and compare with other data valuation methods. As presented in Figure 13, the performance is well maintained even with smaller subsets of the original dataset. Remarkably, even after reducing a clean training set (25,000 samples) by 15% based on our method’s valuation, the performance stays relatively high while outperforming other valuation baselines.
B.5 DATA SUMMARIZATION
As dataset sizes grow, so does the space needed to store the data. Thus, a buyer often would like to shrink the dataset to save resources while retaining performance. Unlike reducing the training set size as in Section B.4, in this experiment we select a smaller, representative subset of the whole dataset that can maintain good performance. To measure the performance of each subset, we compute the validation performance of the model trained on that subset minus the validation performance of the model trained on a random subset of the same size, following the experiment performed in Kwon & Zou (2021). In Figure 16, we observe that our method selects a small subset that performs better than the subsets chosen by the baseline methods most of the time.
B.6 SCALABILITY EXPERIMENT
In the main paper, we have demonstrated time complexity comparison between LAVA and other valuation methods. We have reported runtime comparisons only for 2,000 test samples as this is the scale existing methods can solve in a not excessively long time (within a day). It showcases the advantageous computing efficiency that the proposed approach enjoys over other methods. We further want to emphasize the computational efficiency of LAVA and demonstrate computation efficiency on a larger scale dataset (100,000 samples) with higher dimensions, ImageNet-100. Additionally, we evaluate
other baselines which are able to finish within a day of computation to highlight the advantage of our method as presented in Table 1. Moreover, we highlight the near-linear time complexity of LAVA on CIFAR-10, which shows practical computation efficiency of our method as shown in Figure 17.
B.7 GENERALIZATION TO OTHER TYPES OF BACKDOOR ATTACKS
As we have provided the results of the Trojan square attack (TrojanSQ) (Liu et al., 2017) in Section 4, we now apply LAVA to other backdoor attacks, which are Hello Kitty blending attack (Blend) (Chen et al., 2017) and Trojan watermark attack (Trojan-WM) (Liu et al., 2017), and evaluate the efficacy of our method in detecting different types of backdoor attacks. We simulate these attacks by selecting the target class Airplane and poisoning 2, 500 (5%) samples of the CIFAR-10 dataset of size 50, 000. The backdoor trigger adopted in each attack is portrayed in Figure 8. In Figure 18, we observe that our method can achieve superior detection performance on all the attacks considered. The reason is that despite the difference in trigger pattern, all of these attacks modify both the label and the feature of a poisoned image and thus result in the deviation of our distributional distance that is defined over the product space of feature and label.
B.8 IMPLICATIONS OF THE PROPOSED DATA VALUATION METHOD TO REAL-WORLD DATA MARKETPLACES
One concern in the real-world data marketplace is that data is freely replicable. However, replicates of data introduce no new information, and therefore prior work has argued that a data utility function should be robust to direct data copying (Xu et al., 2021). One advantage of using the class-wise Wasserstein distance to measure data utility is that it is robust to duplication: by its distributional formulation, our method ignores duplicated sets. As shown in Table 3, even after repeating the set five times over the original source set, the distance remains the same. Additionally, with small noise
changes in the features, the distance metric is barely affected. Another concern in the real-world marketplace is that one might find a single data that has highest contribution and duplicate it to maximize the profit. However, again due to the nature of our distributional formulation, duplicating a single point multiple times would increase the distance between the training and the validation set due to the imbalance in training distribution caused by copying that point.
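This duplication-robustness property is straightforward to verify empirically: tiling a dataset leaves its empirical distribution unchanged, so the OT distance to the validation set is unaffected. The sketch below checks this with a plain feature-space OT distance as a stand-in for the full class-wise distance; data and sizes are placeholders.

```python
import numpy as np
import ot

rng = np.random.default_rng(0)
x_src = rng.normal(size=(200, 16))
x_val = rng.normal(size=(100, 16))
x_dup = np.concatenate([x_src] * 5)          # the same source set repeated five times

def uniform(n):
    return np.full(n, 1.0 / n)

d_orig = ot.emd2(uniform(len(x_src)), uniform(len(x_val)), ot.dist(x_src, x_val))
d_dup = ot.emd2(uniform(len(x_dup)), uniform(len(x_val)), ot.dist(x_dup, x_val))
# The duplicated set defines the same empirical measure as the original one, so the
# two distances agree up to solver tolerance; the same argument applies to the
# class-wise distance used by LAVA.
```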
B.9 DETAILED EXPERIMENTAL SETTINGS
Datasets and Models. Table 2 summarizes the details of the dataset, the models, as well as their licenses adopted in our experiments.
Hardware. A server with an NVIDIA Tesla P100-PCIE-16GB graphic card is used as the hardware platform in this work.
Software.
For our implementation, we use PyTorch for the main framework (Paszke et al., 2019), assisted by three main libraries, which are otdd (optimal transport calculation setup with datasets) (Alvarez-Melis & Fusi, 2020), geomloss (actual optimal transport calculation) (Feydy et al., 2019), and numpy (tool for array routines) (Harris et al., 2020). | 1. What is the focus and contribution of the paper on data valuation?
2. What are the strengths of the proposed approach, particularly in its application of the Wasserstein distance?
3. What are the weaknesses of the paper, especially regarding its limitations in valuating data sources and handling duplicate data points?
4. Do you have any questions regarding the paper's experiments and demonstrations?
5. Can the proposed method be applied to explain a particular prediction of a model, similar to Koh & Liang (2017)? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a data valuation method of individual training data points that does not requires learning algorithms via the gradient of the Wasserstein distance between the training set and validation set w.r.t. the perturbations on the probability mass of a data point. While it is based on existing work on the hierarchically-defined Wasserstein distance between datasets, the paper provides new insight into the connection between this distance and the validation performance which is crucial for data validation. Furthermore, it proposes an efficient data valuation method of individual training data points based on the sensitivity of this distance, which has a wide range of use cases as demonstrated in the experiments.
Strengths And Weaknesses
The strength of the paper is in the application of the existing Wasserstein distance between datasets to the data valuation problem with the new insights on the connection between the distance and the validation performance. Additionally, it demonstrates many use cases of the proposed data valuation method in the experiments. The paper is also well-written so that readers can understand the main idea easily.
The main weakness of the paper is probably due to the fact that the proposed data valuation only works for individual data points. I have several questions as follows.
The proposed approach requires a validation dataset with labels. How would the Wasserstein distance be modified to measure the distance between a training set and a 'reference dataset' without labels?
While the paper motivates the problem of data valuation as a central method to the emerging data economy (in data exchange), the proposed method cannot value data sources (which consist of multiple data points). Hence, I am wondering how this approach can be applied in data exchange (between multiple organizations/companies with different datasets).
It is often the case that there are duplicates of data points (or redundant data points that are very similar to one another) in the real-world dataset. For simplicity, let us consider a dataset consisting of 3 data points x1,x2,y where x1 and x2 are duplicates of each other; and y is different from x1,x2. Intuitively, removing x1 does not affect the value of the dataset (since x2 contains the same information as x1), so the value of x1 is low. Hence, point-wise values of x1 and x2 are low. However, if we remove both x1 and x2 (since they have low valuation), then it is problematic because y does not contain information in x1 and x2. How does the proposed pointwise data valuation method work in this case?
Like the work of Koh & Liang (2017), can the proposed method be used to explain a particular prediction of the model by finding the training data point that is responsible for the prediction? In this case, the analogous notion of a validation set contains only a single data point (the prediction to be explained), so I am wondering if the approach can still work.
Clarity, Quality, Novelty And Reproducibility
While the paper applies the existing Wasserstein distance between datasets to the data valuation problem, it proposes novel perspectives through the new insight into the connection between the distance and the validation performance and the use of gradients in constructing an efficient valuation method. The paper is well-written and easy to follow. The significance of the proposed approach is demonstrated through many use cases in the experiments. |
ICLR | Title
LAVA: Data Valuation without Pre-Specified Learning Algorithms
Abstract
Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis and the choice of the learning algorithm is still undetermined then. Another side-effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computation burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. (1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. (2) We develop a novel method to value individual data based on the sensitivity analysis of the class-wise Wasserstein distance. Importantly, these values can be directly obtained for free from the output of off-the-shelf optimization solvers when computing the distance. (3) We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a significant improvement over the state-of-the-art performance while being orders of magnitude faster.
1 INTRODUCTION
Advances in machine learning (ML) crucially rely on the availability of large, relevant, and highquality datasets. However, real-world data sources often come in different sizes, relevance levels, and qualities, differing in their value for an ML task. Hence, a fundamental question is how to quantify the value of individual data sources. Data valuation has a wide range of use cases both within the domain of ML and beyond. It can help practitioners enhance the model performance through prioritizing high-value data sources (Ghorbani & Zou, 2019), and it allows one to make strategic and economic decisions in data exchange (Scelta et al., 2019).
In the past literature (Ghorbani & Zou, 2019; Jia et al., 2019b; Kwon & Zou, 2021), data valuation is posed as a problem of equitably splitting the validation performance of a given learning algorithm among the training data. Formally, given a training dataset $D_t = \{z_i\}_{i=1}^N$, a validation dataset $D_v$, a learning algorithm $A$, and a model performance metric PERF (e.g., classification accuracy), a utility function is first defined over all subsets $S \subseteq D_t$ of the training data: $U(S) := \mathrm{PERF}(A(S))$. Then, the objective of data valuation is to find a score vector $s \in \mathbb{R}^N$ that represents the allocation to each datapoint. For instance, one simple way to value a point $z_i$ is through the leave-one-out (LOO) error $U(D_t) - U(D_t \setminus \{z_i\})$, i.e., the change of model performance when the point is excluded from training. Most of the recent works have leveraged concepts originating from cooperative game theory (CGT), such as the Shapley value (Ghorbani & Zou, 2019; Jia et al., 2019b), Banzhaf value (Wang & Jia, 2022), general semivalues (Kwon & Zou, 2021), and Least cores (Yan & Procaccia, 2021) to value data. Like the LOO, all of these concepts are defined based on the utility function.
∗Equal contribution. Repository publicly available on Github: https://github.com/ruoxi-jia-group/LAVA.
Since the utility function is defined w.r.t. a specific learning algorithm, the data values calculated from the utility function also depend on the learning algorithm. In practice, there are many choice points pertaining to a learning algorithm, such as the model to be trained, the type of learning algorithm, as well as the hyperparameters. The detailed settings of the learning algorithms are often derived from data analysis. However, in many critical applications of data valuation such as informing data acquisition priorities and designing data pricing mechanism, data needs to be valued before the actual analysis and the choice points of the learning algorithm are still undetermined at that time. This gap presents a main hurdle for deploying existing data valuation schemes in the real world.
The reliance on learning algorithms also makes existing data valuation schemes difficult to scale to large datasets. The exact evaluation of LOO error and CGT-based data value notions require evaluating utility functions over different subsets and each evaluation entails retraining the model on that subset: the number of retraining times is linear in the number of data points for the former, and exponential for the latter. While existing works have proposed a variety of approximation algorithms, scaling up the calculation of these notions to large datasets remains expensive. Further, learning-algorithm-dependent approaches rely on the performance scores associated with models trained on different subsets to determine the value of data; thus, they are susceptible to noise due to training stochasticity when the learning algorithm is randomized (e.g., SGD) (Wang & Jia, 2022).
This work addresses these limitations by introducing a learning-agnostic data valuation (LAVA) framework. LAVA is able to produce efficient and useful estimates of data value in a way that is oblivious to downstream learning algorithms. Our technical contributions are listed as follows.
Proxy for validation performance. We propose a proxy for the validation performance associated with a training set based on the non-conventional class-wise Wasserstein distance (Alvarez-Melis & Fusi, 2020) between the training and the validation set. The hierarchically-defined Wasserstein distance utilizes a hybrid Euclidean-Wasserstein cost function to compare the feature-label pairs across datasets. We show that this distance characterizes the upper bound of the validation performance of any given model under certain Lipschitz conditions.
Sensitivity-analysis-based data valuation. We develop a method to assess the value of an individual training point by analyzing the sensitivity of the particular Wasserstein distance to the perturbations on the corresponding probability mass. The values can be directly obtained for free from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. As the Wasserstein distance can be solved much more efficiently with entropy regularization (Cuturi, 2013), in our experiments, we utilize the duals of the entropy-regularized program to approximate the sensitivity. Remarkably, we show that the gap between two data values under the original non-regularized Wasserstein distance can be recovered exactly from the solutions to the regularized program.
State-of-the-art performance for differentiating data quality. We evaluate LAVA over a wide range of use cases, including detecting mislabeled data, backdoor attacks, poisoning attacks, noisy features, and task-irrelevant data, some of which are studied for the first time in the data valuation setting. Our results show that, surprisingly, the learning-agnostic feature of our framework enables a significant performance improvement over existing methods, while being orders of magnitude faster.
2 MEASURING DATASET UTILITY VIA OPTIMAL TRANSPORT
In this section, we consider the problem of quantifying training data utility U(Dt) without knowledge of learning algorithms. Similar to most existing data valuation frameworks, we assume access to a set of validation points Dv. Our idea is inspired by recent work on using the hierarchically-defined Wasserstein distance to characterize the relatedness of two datasets (Alvarez-Melis & Fusi, 2020). Our contribution here is to apply that particular Wasserstein distance to the data valuation problem and to provide a theoretical result that connects the distance to the validation performance of a model, which might be of independent interest.
2.1 OPTIMAL TRANSPORT-BASED DATASET DISTANCE
Background on Optimal Transport (OT). OT is a celebrated choice for measuring the discrepancy between probability distributions (Villani, 2009). Compared to other notable dissimilarity measures such as the Kullback-Leibler divergence (Kullback & Leibler, 1951) or Maximum Mean Discrepancies (MMD) (Szekely et al., 2005), the mathematically well-defined OT distance has advantageous analytical properties. For instance, OT is a distance metric, and it is computationally tractable and computable from finite samples (Genevay et al., 2018; Feydy et al., 2019).
The Kantorovich formulation (Kantorovich, 1942) defines the OT problem as a Linear Program (LP). Given probability measures $\mu_t, \mu_v$ over the space $\mathcal{Z}$, the OT problem is defined as
$$\mathrm{OT}(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z'),$$
where $\Pi(\mu_t, \mu_v) := \{\pi \in \mathcal{P}(\mathcal{Z} \times \mathcal{Z}) \mid \int_{\mathcal{Z}} \pi(z, z')\, dz' = \mu_t, \ \int_{\mathcal{Z}} \pi(z, z')\, dz = \mu_v\}$ denotes the collection of couplings between the two distributions $\mu_t$ and $\mu_v$, and $\mathcal{C} : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}^+$ is a symmetric, positive cost function (with $\mathcal{C}(z, z) = 0$). If $\mathcal{C}(z, z')$ is the Euclidean distance between $z$ and $z'$ according to the distance metric $d$, then $\mathrm{OT}(\mu_t, \mu_v)$ is the 2-Wasserstein distance, which we denote as $W_{\mathcal{C}}(\mu_t, \mu_v) = W_d(\mu_t, \mu_v) := \mathrm{OT}(\mu_t, \mu_v)$. In this work, the notations $\mathrm{OT}$ and $W$ are used interchangeably, with the slight difference that we use $\mathrm{OT}$ to emphasize its various formulations while $W$ specifies the distance metric on which it is computed.
Measuring Dataset Distance. We consider a multi-label setting where we denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, respectively, where $V$ is the number of different labels. Given the training set $\mathcal{D}_t = \{(x_i, f_t(x_i))\}_{i=1}^N$ of size $N$ and the validation set $\mathcal{D}_v = \{(x'_i, f_v(x'_i))\}_{i=1}^M$ of size $M$, one can construct discrete measures $\mu_t(x, y) := \frac{1}{N}\sum_{i=1}^N \delta_{(x_i, y_i)}$ and $\mu_v(x, y) := \frac{1}{M}\sum_{i=1}^M \delta_{(x'_i, y'_i)}$, where $\delta$ is the Dirac function. Consider that each datapoint consists of a feature-label pair $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}$. While the Euclidean distance naturally provides the metric to measure distance between features, the distance between labels generally lacks a definition. Consequently, we define conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Inspired by Alvarez-Melis & Fusi (2020), we measure the distance between two labels in terms of the OT distance between the conditional distributions of the features given each label. Formally, we adopt the following cost function between feature-label pairs: $\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c \ge 0$ is a weight coefficient. We note that $\mathcal{C}$ is a distance metric since $W_d$ is a valid distance metric. With the definition of $\mathcal{C}$, we propose to measure the distance between the training and validation sets using the non-conventional, hierarchically-defined Wasserstein distance between the corresponding discrete measures: $W_{\mathcal{C}}(\mu_t, \mu_v) = \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z')$.
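To make the construction concrete, the sketch below assembles the hybrid cost $\mathcal{C}$ and the resulting dataset distance with the POT library on small feature arrays. This is only an illustration of the definitions above, not the implementation used in our experiments (which relies on the otdd and geomloss packages); the function names, the uniform weights, and the use of exact LP solvers are our own illustrative choices.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def label_to_label_distances(Xt, yt, Xv, yv, labels_t, labels_v):
    """W_d(mu_t(.|y_t), mu_v(.|y_v)) for every pair of labels, via exact OT."""
    D = np.zeros((len(labels_t), len(labels_v)))
    for i, a in enumerate(labels_t):
        for j, b in enumerate(labels_v):
            Xa, Xb = Xt[yt == a], Xv[yv == b]
            M = ot.dist(Xa, Xb, metric="euclidean")
            wa = np.full(len(Xa), 1.0 / len(Xa))
            wb = np.full(len(Xb), 1.0 / len(Xb))
            D[i, j] = ot.emd2(wa, wb, M)
    return D

def classwise_wasserstein(Xt, yt, Xv, yv, c=1.0):
    """Dataset distance W_C with C = d(x_t, x_v) + c * W_d(mu_t(.|y_t), mu_v(.|y_v))."""
    labels_t, labels_v = np.unique(yt), np.unique(yv)
    Dy = label_to_label_distances(Xt, yt, Xv, yv, labels_t, labels_v)
    C = ot.dist(Xt, Xv, metric="euclidean")   # feature part of the cost
    it = np.searchsorted(labels_t, yt)        # label index of each training point
    iv = np.searchsorted(labels_v, yv)        # label index of each validation point
    C = C + c * Dy[np.ix_(it, iv)]            # add the label part of the cost
    a = np.full(len(Xt), 1.0 / len(Xt))       # uniform probability mass on D_t
    b = np.full(len(Xv), 1.0 / len(Xv))       # uniform probability mass on D_v
    return ot.emd2(a, b, C)
```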
Despite its usefulness and potentially broad applications, existing research has not explored the theoretical properties of this notion or established applications upon it. This work aims to fill this gap by extending it in both directions: novel analytical results are presented to provide theoretical justification, while an original computing framework is proposed that extends its applications to a new scenario of datapoint valuation.
Computational Acceleration via Entropic Regularization. Solving the problem above scales cubically with $MN$, which is prohibitive for large datasets. Entropy-regularized OT (entropy-OT) has become a prevailing choice for approximating OT distances as it allows for the fastest-known algorithms. Using the iterative Sinkhorn algorithm (Cuturi, 2013) with almost linear time complexity and memory overhead, entropy-OT can be implemented at large scale with parallel computing (Genevay et al., 2018; Feydy et al., 2019). Given a regularization parameter $\varepsilon > 0$, entropy-OT is formulated as
$$\mathrm{OT}_\varepsilon(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z') + \varepsilon H(\pi \,|\, \mu_t \otimes \mu_v),$$
where $H(\pi \,|\, \mu_t \otimes \mu_v) = \int_{\mathcal{Z}^2} \log\left(\frac{d\pi}{d\mu_t\, d\mu_v}\right) d\pi$. As $\varepsilon \to 0$, the dual solutions to the $\varepsilon$-entropy-OT converge to their OT counterparts as long as the latter are unique (Nutz & Wiesel, 2021).
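A hedged sketch of this acceleration, reusing the cost matrix `C` and marginals `a`, `b` from the previous snippet; the regularization value is illustrative and the entropy term is omitted from the reported cost.

```python
import ot

eps = 0.1                                   # illustrative regularization strength
plan_eps = ot.sinkhorn(a, b, C, reg=eps)    # entropic plan via Sinkhorn iterations
cost_eps = float((plan_eps * C).sum())      # transport cost of the regularized plan

# As eps -> 0 the regularized solution approaches the exact LP optimum.
cost_exact = ot.emd2(a, b, C)
```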
2.2 LOWER CLASS-WISE WASSERSTEIN DISTANCE ENTAILS BETTER VALIDATION PERFORMANCE
In this paper, we propose to use $W_{\mathcal{C}}$, a non-conventional, class-wise Wasserstein distance w.r.t. the special distance function $\mathcal{C}$ defined in Section 2.1, as a learning-agnostic surrogate of validation performance to measure the utility of training data. Note that while Wasserstein distances have been frequently used to bound the change in learning performance due to distribution drift (Courty et al., 2017; Damodaran et al., 2018; Shen et al., 2018; Ge et al., 2021), this paper is the first to bound the performance change by the hierarchically-defined Wasserstein distance with respect to the hybrid cost $\mathcal{C}$. Figure 1 provides an empirical justification for using this novel distance metric as a proxy: it presents the relation between the class-wise Wasserstein distance and a model's validation performance, where each curve corresponds to a specific dataset-model pair. Since each dataset differs in size and structure, the distances are on different scales; we therefore normalize them to a common scale. The figure shows that, across datasets and models, the validation performance decreases as the distance increases.
The next theorem theoretically justifies using this Wasserstein distance as a proxy for validation performance of a model. With assumptions on Lipschitzness of the downstream model as well as the labeling functions associated with the training and validation sets (as explicated in Appendix A), we show that the discrepancy between the training and validation performance of a model is bounded by the hierarchically-defined Wasserstein distance between the training and the validation datasets.
Theorem 1. We denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. Let $f : \mathcal{X} \to [0,1]^V$ be the model trained on the training data. By definition, we have $\|f(\cdot)\|, \|f_t(\cdot)\|, \|f_v(\cdot)\| \le V$. Let $\mu_t, \mu_v$ be the training and validation distributions, respectively, and let $\mu_t(\cdot|y)$ and $\mu_v(\cdot|y)$ be the corresponding conditional distributions given label $y$. Assume that the model $f$ is $\epsilon$-Lipschitz and the loss function $\mathcal{L} : \{0,1\}^V \times [0,1]^V \to \mathbb{R}^+$ is $k$-Lipschitz in both inputs. Define the cost function $\mathcal{C}$ between $(x_t, y_t)$ and $(x_v, y_v)$ as $\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c$ is a constant. Under a certain cross-Lipschitzness assumption for $f_t$ and $f_v$ detailed in Appendix A, we have
$$\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \le \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\, W_{\mathcal{C}}(\mu_t, \mu_v) + \mathcal{O}(kV).$$
Proofs are deferred to Appendix A. The bound is interesting to interpret. The first term on the right-hand side corresponds to the training performance. In practice, when a model with large enough capacity is used, this term is small. The second one is the exact expression of the Wasserstein distance that we propose to use as a proxy for validation performance. The last error term is due to possible violation of the cross-Lipschitzness assumption for ft and fv. This term will be small if ft and fv assign the same label to close features with high probability. If the last term is small enough, it is possible to use the proposed Wasserstein distance as proxy for validation loss provided that f , ft and fv verify the cross-Lipschitz assumptions. The bound resonates with the empirical observation in Figure 1 that with lower distance between the training and the validation data, the validation loss of the trained model decreases.
3 EFFICIENT VALUATION OF INDIVIDUAL DATAPOINTS
Note that the class-wise Wasserstein distance defined in the previous section can be used to measure the utility for subsets of Dt. Given this utility function, one can potentially use existing CGT-based notions such as the Shapley value to measure the contribution of individual points. However, even approximating these notions requires evaluating the utility function on a large number of subsets, which incurs large extra computation costs. In this section, we introduce a new approach to valuating individual points. Remarkably, our values can be directly obtained for free from the output of off-the-shelf optimization solvers once the proposed Wasserstein distance between the full training and testing datasets is computed.
3.1 DATAPOINT VALUATION VIA PARAMETER SENSITIVITY
The OT distance is known to be insensitive to small differences while not being robust to large deviations (Villani, 2021). This feature is naturally suitable for detecting abnormal datapoints: it disregards normal variations in distances between clean data while being sensitive to the abnormal distances of outlying points. We propose to measure an individual point's contribution based on the gradient of the OT distance with respect to perturbations of the probability mass associated with that point.
Gradients are local information. However, unlike widely used influence functions that only hold for infinitesimal perturbation (Koh & Liang, 2017), gradients for LP hold precisely in a local range and still encode partial information beyond that range, making it capable of reliably predicting the change to the OT distance due to adding or removing datapoints without the need of re-calculation. Also, the gradients are directed information, revealing both positive and negative contributions for each
datapoint and allowing one to perform ranking of datapoints based on the gradient values. Finally, the OT distance always considers the collective effect of all datapoints in the dataset.
Leveraging the duality theorem for LP, we rewrite the original OT problem (introduced in Section 2.1) in the equivalent form $\mathrm{OT}(\mu_t, \mu_v) := \max_{(f,g) \in C^0(\mathcal{Z})^2} \langle f, \mu_t \rangle + \langle g, \mu_v \rangle$, where $C^0(\mathcal{Z})$ is the set of all continuous functions and $f$ and $g$ are the dual variables. Let $\pi^*$ and $(f^*, g^*)$ be the corresponding optimal solutions to the primal and dual problems. The Strong Duality Theorem indicates that $\mathrm{OT}(\pi^*(\mu_t, \mu_v)) = \mathrm{OT}(f^*, g^*)$, where the right-hand side is the distance parameterized by $\mu_t$ and $\mu_v$. From the Sensitivity Theorem (Bertsekas, 1997), the gradient of the distance w.r.t. the probability mass of datapoints in the two datasets can be expressed as $\nabla_{\mu_t} \mathrm{OT}(f^*, g^*) = (f^*)^T$ and $\nabla_{\mu_v} \mathrm{OT}(f^*, g^*) = (g^*)^T$. Note that the original formulation in Section 2.1 is always redundant as the constraint $\sum_{i=1}^N \mu_t(z_i) = \sum_{i=1}^M \mu_v(z'_i) = 1$ is already implied, rendering the dual solution non-unique. To address this issue, we first remove any one of the constraints in $\Pi(\mu_t, \mu_v)$ to make the primal formulation non-degenerate. Then, we assign a value of zero to the dual variable corresponding to that removed primal constraint.
When measuring the gradients of the OT distance w.r.t. the probability mass of a given datapoint in each dataset, we calculate the calibrated gradient as
$$\frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_t(z_i)} = f_i^* - \sum_{j \in \{1,\ldots,N\} \setminus i} \frac{f_j^*}{N-1}, \qquad \frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_v(z'_i)} = g_i^* - \sum_{j \in \{1,\ldots,M\} \setminus i} \frac{g_j^*}{M-1}, \qquad (1)$$
which represents the rate of change in the OT distance w.r.t the change of the probability mass of a given datapoint along the direction ensuring the probability mass for all datapoints in the dataset always sums up to one (explicitly enforcing the removed constraint). The value of calibrated gradients is independent of the choice of selection during the constraint removal.
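For empirical measures, Eq. (1) can be read off the dual potentials returned by an off-the-shelf LP solver. The sketch below uses POT's exact solver with the same `a`, `b`, `C` as above; the helper name is ours, and handling of the redundant constraint is left to the solver's defaults.

```python
import numpy as np
import ot

def calibrated_gradients(a, b, C):
    """Calibrated gradients of OT(mu_t, mu_v) w.r.t. each point's probability mass (Eq. 1)."""
    _, log = ot.emd(a, b, C, log=True)        # exact LP; `log` carries the dual potentials
    f, g = np.asarray(log["u"]), np.asarray(log["v"])
    N, M = len(f), len(g)
    grad_t = f - (f.sum() - f) / (N - 1)      # f_i minus the mean of the other training potentials
    grad_v = g - (g.sum() - g) / (M - 1)      # g_i minus the mean of the other validation potentials
    return grad_t, grad_v
```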
Datapoint valuation via calibrated gradients. The calibrated gradients predict how the OT distance changes as more probability mass is shifted to a given datapoint. This can be interpreted as a measure of the contribution of the datapoint to the OT distance. The contribution can be positive or negative, suggesting shifting more probability mass to this datapoint would result in an increase or decrease of the dataset distance, respectively. If we want a training set to match the distribution of the validation dataset, then removing datapoints with large positive gradients while increasing datapoints with large negative gradients can be expected to reduce their OT distance. As we will show later, the calibrated gradients can provide a tool to detect abnormal or irrelevant data in various applications.
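For instance, a simple selection step based on these quantities might look as follows; the 20% budget is arbitrary and only for illustration.

```python
# Rank training points: large positive calibrated gradients mark points whose extra
# probability mass would most increase the distance to the validation set.
grad_t, _ = calibrated_gradients(a, b, C)
order = np.argsort(-grad_t)                   # most "harmful" points first
to_inspect = order[: int(0.2 * len(order))]   # e.g., review or drop the worst 20%
```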
Radius for accurate predictions. The Linear Programming theories (Bertsimas & Tsitsiklis, 1997) give that for each non-degenerate optimal solution, we are always able to perturb parameters on the right-hand side of primal constraints (Π(µt, µv) in 2.1) in a small range without affecting the optimal solution to the dual problem. When the perturbation goes beyond a certain range, the dual solution becomes primal infeasible and the optimization problem needs to be solved again. Hence, the calibrated gradients are local information and we would like to know the perturbation radius such that the optimal dual solution remains unchanged—i.e., whether this range is large enough such that the calibrated gradients can accurately predict the actual change to the OT distance. If the perturbation goes beyond this range, the prediction may become inaccurate as the dual solution only encodes partial information about the optimization.
In our evaluation, we find that this range is about 5% to 25% of the probability measure of the datapoint ($\mu_{(\cdot)}(z_i)$) for perturbations in both directions, and the pattern seems independent of the size of the datasets. This range being less than the probability mass of a datapoint suggests that we can only approximately predict the change to the OT distance from removing/adding a datapoint to the dataset, though the relative error is well within acceptable limits (depicted in Figure 2).
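The sanity check behind this observation can be sketched as follows: shift a small fraction of one point's mass along the calibrated direction, re-solve the OT problem, and compare with the first-order prediction. The step size and helper names are illustrative.

```python
import numpy as np
import ot

def check_local_prediction(a, b, C, i, frac=0.05):
    """Shift `frac` of point i's mass onto it (taken uniformly from the other points),
    re-solve, and compare against the first-order prediction from Eq. (1)."""
    grad_t, _ = calibrated_gradients(a, b, C)
    base = ot.emd2(a, b, C)
    step = frac * a[i]
    a_new = a.copy()
    a_new[i] += step
    others = np.arange(len(a)) != i
    a_new[others] -= step / (len(a) - 1)      # total mass stays equal to one
    actual = ot.emd2(a_new, b, C)
    predicted = base + grad_t[i] * step
    return actual, predicted
```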
3.2 PRECISE RECOVERY OF RANKING FOR DATA VALUES OBTAINED FROM ENTROPY-OT
Due to computational advantages of the entropy-OT (defined in Eq. 2.1), one needs to resort to the solutions to entropy-OT to calculate data values. We quantify the deviation in the calibrated gradients caused by the entropy regularizer. This analysis provides foundations on the potential impact of the deviation on the applications built on these gradients. Theorem 2. Let OT(µt, µv) and OTε(µt, µv) be the original formulation and entropy penalized formulation (as defined in 2.1) for the OT problem between the empirical measures µt and µv associated with the two datasets Dt and Dv, respectively, where |Dt| = N and |Dv| = M . Then,
for any $i \ne j \ne k \in \{1, 2, \ldots, N\}$ and $o \ne p \ne q \in \{1, 2, \ldots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $\mathcal{D}_t$ and the difference for $z'_p$ and $z'_q$ in $\mathcal{D}_v$ can be calculated as
$$\frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_t(z_i)} - \frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_t(z_k)} = \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_t(z_i)} - \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left(\frac{1}{(\pi_\varepsilon^*)_{kj}} - \frac{1}{(\pi_\varepsilon^*)_{ij}}\right), \qquad (2)$$
$$\frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_v(z'_p)} - \frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_v(z'_q)} = \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_v(z'_p)} - \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left(\frac{1}{(\pi_\varepsilon^*)_{qo}} - \frac{1}{(\pi_\varepsilon^*)_{po}}\right), \qquad (3)$$
where π∗ε is the optimal primal solution to the entropy penalized OT problem defined in 2.1, zj is any datapoint in Dt other than zi or zk, and z′o is any datapoint in Dv other than z′p or z′q .
The gradient difference on the left-hand side of (2) represents the groundtruth value difference between two training points zi and zk as the values are calculated based on the original OT formulation. In practice, for the sake of efficiency, one only solves the regularized formulation instead and, therefore, this groundtruth difference cannot be obtained directly. Theorem 2 nevertheless indicates a very interesting fact that one can calculate the groundtruth difference based on the solutions to the regularized problem, because every term in the right-hand side only depends on the solutions to the regularized problem. Particularly, the groundtruth value difference is equal to the value difference produced by the regularized solutions plus some calibration terms that scale with ε (Nutz & Wiesel, 2021). This result indicates that while it is not possible to obtain individual groundtruth value by solving the regularized problem, one can actually exactly recover the groundtruth value difference based on the regularized solutions. In many applications of data valuation such as data selection, it is the order of data values that matters (Kwon & Zou, 2021). For instance, to filter out low-quality data, one would first rank the datapoints based on their values and then throw the points with lowest values. In these applications, solving the entropy-regularized program is an ideal choice—which is both efficient and recovers the exact ranking of datapoint values. Finally, note that Eq. 3 presents a symmetric result for the calibrated gradients for validation data. In our experiments, we set ϵ = 0.1, rendering the corresponding calibration terms to be negligible. As a result, we can directly use the calibrated gradients solved by the regularized program to rank datapoint values.
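A hedged sketch of how the recovery in Eq. (2) could be used in code: `grad_eps_t` stands for calibrated gradients computed from the duals of whichever entropic solver is used (their extraction is solver-specific and omitted here), while the calibration term only needs entries of the regularized plan; `j` is the auxiliary index of Theorem 2.

```python
import ot

eps = 0.1
pi_eps = ot.sinkhorn(a, b, C, reg=eps)        # regularized primal plan

def recovered_value_gap(grad_eps_t, pi_eps, i, k, j, eps):
    """Ground-truth gap between the values of training points i and k, recovered
    from the regularized solution as in Eq. (2)."""
    N = pi_eps.shape[0]
    calib = eps * N / (N - 1) * (1.0 / pi_eps[k, j] - 1.0 / pi_eps[i, j])
    return (grad_eps_t[i] - grad_eps_t[k]) - calib
```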
4 EXPERIMENTS
In this section, we demonstrate the practical efficacy and efficiency of LAVA on various classification datasets. We compare with nine baselines: (1) Influence functions (INF) (Koh & Liang, 2017), which approximates the LOO error with first-order extrapolation; (2) TracIn-Clean (Pruthi et al., 2020), which accumulates the loss change on validation data during training whenever the training point of interest is sampled; (3) TracIn-Self (Pruthi et al., 2020), which is similar to TracIn-Clean but accumulates the training loss changes; (4) KNN-Shapley (KNN-SV) (Jia et al., 2019a), which
approximates the Shapley value using K-Nearest-Neighbor as a proxy model; and (5) Random, a setting where we select a random subset from the target dataset. We also consider the popular data valuation approaches: (6) Permutation Sampling-based Shapley value (Perm-SV) (Jia et al., 2019b), (7) Least Cores (LC) (Yan & Procaccia, 2021), (8) TMC-Shapley (TMC-SV), and (9) G-Shapley (G-SV) (Ghorbani & Zou, 2019). Baselines (6)-(9) are, however, computationally infeasible for the scale of data that we study here, so we exclude them from the evaluation of efficacy in the different use cases. We also provide a detailed runtime comparison of all baselines. For all methods to be compared, a validation set of 10,000 samples is assumed. For our method, we first use the validation data to train a deep neural network model PreActResNet18 (He et al., 2016) from scratch for feature extraction. Then, from its output, we compute the class-wise Wasserstein distance and the calibrated gradients for data valuation. Details about datasets, models, hyperparameter settings, and ablation studies of the hyperparameters and validation sizes are provided in Appendix B.
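At a high level, the evaluation pipeline can be sketched as below. All helper names (`train_embedder`, `embed`, `classwise_wasserstein_with_duals`) are placeholders for the steps described in this section rather than functions of any released package, and the sign convention for values is our own.

```python
import numpy as np

embedder = train_embedder(validation_loader)                # e.g., PreActResNet18 trained on D_v
Xt, yt = embed(embedder, training_loader)                   # embedded training set (50k points)
Xv, yv = embed(embedder, validation_loader)                 # embedded validation set (10k points)
grads = classwise_wasserstein_with_duals(Xt, yt, Xv, yv, c=1.0)  # calibrated gradients
values = -grads                                             # low value = likely bad (large gradient)
ranking = np.argsort(values)                                # ascending values, inspected first
```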
We evaluate on five different use cases of data valuation: detecting backdoor attack, poisoning attack, noisy features, mislabeled data, and irrelevant data. The first four are conventional tasks in the literature and the last one is a new case. All of them have a common goal of identifying “low-quality” training points. To achieve this goal, we rank datapoints in ascending order of their values and remove some number of points with lowest data values. For each removal budget, we calculate the detection rate, i.e., the percentage of the points that are truly bad within the removed points.
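A minimal sketch of this metric is given below; the recall-style variant used to describe some figures divides by the total number of bad points instead.

```python
import numpy as np

def detection_curve(values, is_bad, budgets):
    """Share of truly bad points among the removed ones, for each removal budget
    (use is_bad[order[:b]].sum() / is_bad.sum() for the recall-style variant)."""
    order = np.argsort(values)                      # ascending data values, removed first
    return [float(is_bad[order[:b]].mean()) for b in budgets]
```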
Backdoor Attack Detection. A popular technique of introducing backdoors to models is by injecting maliciously constructed data into a training set (Zeng et al., 2021). At test time, any trained model would misclassify inputs patched with a backdoor trigger as the adversarially-desired target class. In the main text, we consider the Trojan Square attack, a popular attack algorithm (Liu et al., 2017), which injects training points that contain a backdoor trigger and are relabeled as a target class. The evaluation of other types of backdoor attacks can be found in Appendix B. To simulate this attack, we select the target attack class Airplane and poison 2500 (5%) samples of the total CIFAR-10 training set (50k) with a square trigger. In Figure 3 I.(a), we compare the detection rates of different data valuation methods. LAVA and TracIn-Clean outperform the others by a large margin. In particular, for LAVA, the first 20% of the points that it removes contain at least 80% of the poisoned data. We also evaluate whether the model trained after the removal still suffers from the backdoor vulnerability. To perform this evaluation, we calculate the attack accuracy, i.e., the accuracy of the model trained on the remaining points to predict backdoored examples as the target label. A successful data removal would yield a lower attack accuracy. Figure 3 I.(b) shows that our method already takes effect in the early stages, whereas other baselines can start defending from the attack only after removing over 13, 000 samples. The efficacy of LAVA is in part attributable to inspection of distances between both features and labels. The backdoored training samples that are poisoned to the target class will be
“unnatural” in that class, i.e., they have a large feature distance from the original samples in the target class. While the poisoned examples contain a small feature perturbation compared to the natural examples from some other classes, their label distance to them is large because their labels are altered.
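A simplified stand-in for this poisoning procedure is sketched below, assuming CIFAR-10 images as a float array in [0, 1] with CHW layout and `labels` as an integer array; the trigger size, pixel value, and target class index are illustrative, and the actual Trojan trigger of Liu et al. (2017) is generated differently.

```python
import numpy as np

def poison_with_square_trigger(images, labels, idx, target=0, size=5, value=1.0):
    """Stamp a bright square in the corner of the selected images and relabel them
    to the target class (illustrative Trojan-square-style poisoning)."""
    images, labels = images.copy(), labels.copy()
    images[idx, :, -size:, -size:] = value
    labels[idx] = target
    return images, labels

rng = np.random.default_rng(0)
poison_idx = rng.choice(len(labels), size=int(0.05 * len(labels)), replace=False)  # 5% of the set
```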
Poisoning Attack Detection. Poisoning attacks are similar to backdoor attacks in the sense that they both inject adversarial points into the training set to manipulate the prediction of certain test examples. However, poisoning attacks are considered unable to control test examples. We consider a popular attack termed “feature-collision” attack (Shafahi et al., 2018), where we select a target sample from the Cat class test set and blend the selected image with the chosen target class training samples, Frog in our case. In this attack, we do not modify labels and blend the Cat image only into 50 (0.1%) samples of Frog, which makes this attack especially hard to detect. During inference time, we expect the attacked model to consistently classify the chosen Cat as a Frog. In Figure 3 II.(a), we observe that LAVA outperforms all baselines and achieves an 80% detection rate by removing only 11k samples, which is around 60% fewer samples than the highest baseline. Figure 3 II.(b) shows that by removing data according to LAVA ranking, the target model has reduced the confidence of predicting the target Cat sample as a Frog to below 40%. Our technique leverages the fact that the features from a different class are mixed with the features of the poisoned class, which increases the feature distance between the poisoned and non-poisoned Frog examples.
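An illustrative pixel-space blending step in the spirit of this attack is sketched below; the blending ratio, class index, and number of poisons are placeholders, and Shafahi et al. (2018) construct the poisons more carefully in feature space.

```python
import numpy as np

def feature_collision_poison(images, labels, target_image, base_class=6, n_poison=50, alpha=0.3):
    """Blend one chosen target image into a few training images of the base class,
    leaving their labels untouched."""
    images = images.copy()
    base_idx = np.where(labels == base_class)[0][:n_poison]
    images[base_idx] = (1 - alpha) * images[base_idx] + alpha * target_image
    return images, base_idx
```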
Noisy Feature Detection. While adding small Gaussian noises to training samples may benefit model robustness (Rusak et al., 2020), strong noise, such as due to sensor failure, can significantly affect the model performance. We add strong white noise to 25% of all CIFAR-10 dataset without changing any labels. Our method performs extremely well as shown in Figure 3 III.(a) and detects all 12,500 noisy samples by inspecting less than 15,000 samples. This explains the sudden drop of the model’s accuracy at the removal budget of 15,000 samples in Figure 3 III.(b): the model starts throwing away only clean samples from that point. LAVA performs well in this scenario since the strong noise increases the feature distance significantly.
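The corruption itself can be sketched as follows, with images as a float array in [0, 1]; the noise scale is an illustrative stand-in for "strong white noise".

```python
import numpy as np

rng = np.random.default_rng(0)
noisy_idx = rng.choice(len(images), size=len(images) // 4, replace=False)   # corrupt 25% of the set
noise = rng.normal(0.0, 0.8, size=images[noisy_idx].shape)                  # strong white noise
images_noisy = images.copy()
images_noisy[noisy_idx] = np.clip(images_noisy[noisy_idx] + noise, 0.0, 1.0)
```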
Mislabeled Data Detection. Due to the prevalence of human labeling errors (Karimi et al., 2020), it is crucial to detect mislabeled samples. We shuffle the labels of 25% of the samples in the CIFAR-10 dataset to random classes. Unlike backdoor and poisoning attacks, this case is especially hard to detect since the wrong samples are spread throughout all classes instead of being placed inside a single target class. However, as shown in Figure 3 IV.(a), LAVA's detection rate outperforms the other baselines, and the model performance is maintained even after removing 20k datapoints (Figure 3 IV.(b)).
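An illustrative label-shuffling setup for this use case, assuming `labels` is the integer CIFAR-10 label array (note that a random reassignment occasionally keeps the original label):

```python
import numpy as np

rng = np.random.default_rng(0)
flip_idx = rng.choice(len(labels), size=len(labels) // 4, replace=False)    # mislabel 25% of the points
labels_noisy = labels.copy()
labels_noisy[flip_idx] = rng.integers(0, 10, size=len(flip_idx))            # random CIFAR-10 class
```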
Irrelevant Data Detection. Often the collected datasets through web scraping have irrelevant samples in given classes (Northcutt et al., 2021; Tsipras et al., 2020), e.g., in a class of Glasses, we might have both water glass and eyeglasses due to lack of proper inspection or class meaning specification. This case is different from the mislabeled data scenario, in which case the training features are all relevant to the task. Since the irrelevant examples are highly likely to have completely different features than the desired class representation, LAVA is expected to detect these examples. We design an experiment where we remove all images of one specific class from the classification output but split them equally to the other remaining classes as irrelevant images. As shown in Figure 4, the detection result over a class varies based on the distance between that class and the class from which irrelevant images are drawn. For instance, when Deer images are placed into the Truck class, we can detect almost 94% of all Deer images within first 500 removed images. On the other hand, when we place Cat images into dog class, our detection rate drops to 45% within the top 500.
Computational Efficiency. So far, we have focused on the method’s performance without considering the actual runtime. We compare the runtime-performance tradeoff on the CIFAR-10 example of 2000 samples with 10% backdoor data, a scale in which every baseline can be executed in a reasonable time. As shown in Figure 5, our method achieves a significant improvement in efficiency while being able to detect bad data more effectively.
Dependence on Validation Data Size. For the current experiments, we have assumed a validation set of size 10K. Such a scale of data is not hard to acquire, as one can get high-quality data from crowdsourcing platforms, such as Amazon Mechanical Turk, at $12 per 1K samples (AWS, 2019). While our method achieves remarkable performance when using 10K validation data, we perform an ablation study on much smaller sets (Appendix B.2.1), where LAVA, notably, can still outperform other baselines. As an example on mislabeled data detection, our method with 2K validation data achieves an 80% detection rate at a data removal budget of 25K (Fig. 9), whereas the best-performing baseline needs a validation set 5 times larger, 10K, to reach the same performance (Fig. 3 IV.(a)). Furthermore, even on a tiny validation set of size 500, LAVA consistently outperforms all the baselines with the same validation size (Fig. 11). This shows that our method remains effective across various validation data sizes.
5 RELATED WORK
Existing data valuation methods include LOO and the influence function (Koh & Liang, 2017), the Shapley value (Jia et al., 2019b; Ghorbani & Zou, 2019; Wang & Jia, 2023), the Banzhaf value (Wang & Jia, 2022), Least Cores (Yan & Procaccia, 2021), Beta Shapley (Kwon & Zou, 2021), and reinforcement learning-based methods (Yoon et al., 2020). However, they all assume knowledge of the underlying learning algorithm and suffer from large computational complexity. The work of Jia et al. (2019a) proposed to use a K-Nearest-Neighbor classifier as a default proxy model to perform data valuation. While it can be thought of as a learning-agnostic data valuation method, it is not as effective and efficient as our method in distinguishing data quality. Xu et al. (2021) propose to use the volume to measure the utility of a dataset. Volume is agnostic to learning algorithms and easy to calculate because it is defined simply as the square root of the trace of the feature matrix inner product. However, the sole dependence on features makes it incapable of detecting bad data caused by labeling errors. Moreover, to evaluate the contribution of individual points, the authors propose to resort to the Shapley value, which remains expensive for large datasets.
6 DISCUSSION AND OUTLOOK
This paper describes a learning-agnostic data valuation framework. In particular, in contrast to existing methods which typically adopt model validation performance as the utility function, we approximate the utility of a dataset based on its class-wise Wasserstein distance to a given validation set and provide theoretical justification for this approximation. Furthermore, we propose to use the calibrated gradients of the OT distance to value individual datapoints, which can be obtained for free if one uses an off-the-shelf solver to calculate the Wasserstein distance. Importantly, we have tested on various datasets, and our LAVA framework can significantly improve the state-of-the-art performance of using data valuation methods to detect bad data while being substantially more efficient. Due to the stochasticity of ML and the inherent tolerance to noise, it is often challenging to identify low-quality data by inspecting their influence on model performance scores. The take-away from our empirical study is that despite being extensively adopted in the past, low-quality data detection through model performance changes is actually suboptimal; lifting the dependence of data valuation on the actual learning process provides a better pathway to distinguish data quality.
Despite the performance and efficiency improvement, our work still has some limitations. As a result, it opens up many new investigation venues: (1) How to further lift the dependence on validation data? While a validation set representative of the downstream learning task is a common assumption in the ML literature, it may or may not be available during data exchange. (2) Our design could be vulnerable to existing poisons that directly or indirectly minimize the similarity to clean data (Huang et al., 2021; Pan et al., 2022). Further investigation into robust data valuation would be intriguing. (3) Our current method does not have enough flexibility for tasks that aim for goals beyond accuracy, e.g., fairness. Folding other learning goals in is an exciting direction. (4) Customizing the framework to natural language data is also of practical interest.
7 ACKNOWLEDGEMENTS
RJ and the ReDS Lab gratefully acknowledge the support from the Cisco Research Award, the Virginia Tech COE Fellowship, and the NSF CAREER Award. Jiachen T. Wang is supported by Princeton’s Gordon Y. S. Wu Fellowship. YZ is supported by the Amazon Fellowship.
APPENDIX A RESTATEMENT OF THEOREMS AND FULL PROOFS
In this section, we will restate our main results and give full proofs.
A.1 SUMMARY OF NOTATIONS
Let $\mu_t, \mu_v$ be the training and validation distributions, respectively. We denote $f_t : \mathcal{X} \to \{0,1\}^V$, $f_v : \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. We can then denote the joint distributions of random data-label pairs $(x, f_t(x))_{x \sim \mu_t(x)}$ and $(x, f_v(x))_{x \sim \mu_v(x)}$ as $\mu_t^{f_t}$ and $\mu_v^{f_v}$, respectively, which are the same notations as $\mu_t$ and $\mu_v$ but made with explicit dependence on $f_t$ and $f_v$ for clarity. The distributions of $(f_t(x))_{x \sim \mu_t(x)}$ and $(f_v(x))_{x \sim \mu_v(x)}$ are denoted as $\mu_{f_t}$, $\mu_{f_v}$, respectively. Besides, we define the conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Let $f : \mathcal{X} \to [0,1]^V$ be the model trained on the training data and $\mathcal{L} : \{0,1\}^V \times [0,1]^V \to \mathbb{R}^+$ be the loss function. We denote $\pi \in \Pi(\mu_1, \mu_2)$ as a coupling between a pair of distributions $\mu_1, \mu_2$ and $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as a distance metric function. The 1-Wasserstein distance with respect to the distance function $d$ between two distributions $\mu_1, \mu_2$ is defined as $W_d(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y) \sim \pi}[d(x, y)]$. More generally, the 1-Wasserstein distance with respect to the cost function $\mathcal{C}$ is defined as $W_{\mathcal{C}}(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y) \sim \pi}[\mathcal{C}(x, y)]$.
A.2 STATEMENT OF ASSUMPTIONS
To prove Theorem 1, we need the concept of probabilistic cross-Lipschitzness, which assumes that two labeling functions should produce consistent labels with high probability on two close instances.

Definition 3 (Probabilistic Cross-Lipschitzness). Two labeling functions $f_t : \mathcal{X} \to \{0,1\}^V$ and $f_v : \mathcal{X} \to \{0,1\}^V$ are $(\epsilon, \delta)$-probabilistic cross-Lipschitz w.r.t. a joint distribution $\pi$ over $\mathcal{X} \times \mathcal{X}$ if for all $\epsilon > 0$:
$$\mathbb{P}_{(x_1, x_2) \sim \pi}\big[\|f_t(x_1) - f_v(x_2)\| > \epsilon\, d(x_1, x_2)\big] \le \delta. \qquad (4)$$
Intuitively, given labeling functions ft, fv and a coupling π, we can bound the probability of finding pairs of training and validation instances labelled differently in a (1/ϵ)-ball with respect to π.
Our Assumptions. Assume that $f$ is an $\epsilon$-Lipschitz function. Given a metric function $d(\cdot,\cdot)$, we define a cost function $\mathcal{C}$ between $(x_t, y_t)$ and $(x_v, y_v)$ as
$$\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \qquad (5)$$
where $c$ is a constant. Let $\pi^*_{x,y}$ be the coupling between $\mu_t^{f_t}, \mu_v^{f_v}$ such that
$$\pi^*_{x,y} := \arg\inf_{\pi \in \Pi(\mu_t^{f_t}, \mu_v^{f_v})} \mathbb{E}_{((x_t, y_t), (x_v, y_v)) \sim \pi}[\mathcal{C}((x_t, y_t), (x_v, y_v))]. \qquad (6)$$
We define two couplings $\pi^*$ and $\tilde{\pi}^*$ between $\mu_t(x), \mu_v(x)$ as follows:
$$\pi^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dy_t\, dy_v. \qquad (7)$$
For $\tilde{\pi}^*$, we first need to define a coupling between $\mu_{f_t}, \mu_{f_v}$:
$$\pi^*_y(y_t, y_v) := \int_{\mathcal{X}} \int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dx_t\, dx_v, \qquad (8)$$
and another coupling between $\mu_t^{f_t}, \mu_v^{f_v}$:
$$\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\, \mu_t(x_t|y_t)\, \mu_v(x_v|y_v). \qquad (9)$$
Finally, $\tilde{\pi}^*$ is constructed as follows:
$$\tilde{\pi}^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_y(y_t, y_v)\, \mu_t(x_t|y_t)\, \mu_v(x_v|y_v)\, dy_t\, dy_v. \qquad (10)$$
It is easy to see that all joint distributions defined above are couplings between the corresponding distribution pairs.
We assume that ft, fv are (ϵtv, δtv)-probabilistic cross-Lipschitz with respect to π̃∗ in metric d. Additionally, we assume that ϵtv/ϵ ≤ c and the loss function L is k-Lipschitz in both inputs. Besides, from their definitions above, we have that ∥f(x)∥, ∥ft(x)∥, ∥fv(x)∥ ≤ V . The assumption of probabilistic cross-Lipschitzness would be violated only when the underlying coupling assigns large probability to pairs of training-validation features that are close enough (within 1/ϵtv-ball) but labeled differently. However, π̃∗ is generally not such a coupling. Note that π∗ is the optimal coupling between training and validation distributions that minimizes a cost function C pertaining to both feature and label space. Hence, π∗y(yt, yv), the marginal distribution of π
∗ over the training and validation label space, tends to assign high probability to those label pairs that agree. On the other hand, π̃∗x,y can be thought of as a coupling that first generates training-validation labels from π∗y and then generates the features in each dataset conditioning on the corresponding labels. Hence, the marginal distribution π̃∗ of training-validation feature pairs generated by π̃∗x,y would assign high likelihood to those features with the same labels. So, conceptually, the probabilistic cross-Lipschitzness assumption should be easily satisfied by π̃∗.
A.3 DETAILED PROOF
Theorem 1 (restated). Given the above assumptions, we have
$$\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \le \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\, W_{\mathcal{C}}(\mu_t^{f_t}, \mu_v^{f_v}) + 2kV\delta_{tv}. \qquad (11)$$
Proof.
Ex∼µv(x)[L(fv(x), f(x))] (12) = Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))] + Ex∼µt(x)[L(ft(x), f(x))] (13) ≤ Ex∼µt(x)[L(ft(x), f(x))] +
∣∣Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))]∣∣ . (14) We bound
∣∣Ex∼µv(x) [L(fv(x), f(x))]− Ex∼µt(x) [L(ft(x), f(x))]∣∣ as follows: ∣∣Ex∼µv(x) [L(fv(x), f(x))]− Ex∼µt(x) [L(ft(x), f(x))]∣∣ (15) =
∣∣∣∣∫ X2 [L(fv(xv), f(xv))− L(ft(xt), f(xt))] dπ∗(xt, xv) ∣∣∣∣ (16)
= ∣∣∣∣∫ X2 [L(fv(xv), f(xv))− L(fv(xv), f(xt)) + L(fv(xv), f(xt))− L(ft(xt), f(xt))] dπ∗(xt, xv) ∣∣∣∣ (17)
≤ ∫ X2
|L(fv(xv), f(xv))− L(fv(xv), f(xt))| dπ∗(xt, xv)︸ ︷︷ ︸ U1
(18)
+ ∫ X2
|L(fv(xv), f(xt))− L(ft(xt), f(xt))| dπ∗(xt, xv)︸ ︷︷ ︸ U2 , (19)
where the last inequality is due to triangle inequality.
Now, we bound U1 and U2 separately. For U1, we have U1 ≤ k ∫ X 2 ∥f(xv)− f(xt)∥ dπ∗(xt, xv) (20)
≤ kϵ ∫ X 2 d(xt, xv) dπ ∗(xt, xv), (21)
where both inequalities are due to Lipschitzness of L and f . In order to bound U2, we first recall that π∗y(yt, yv) = ∫ X ∫ X π ∗ x,y((xt, yt), (xv, yv)) dxtdxv and π̃∗x,y((xt, yt), (xv, yv)) := π ∗ y(yt, yv)µt(xt|yt)µv(xv|yv):
Observe that
U2 = ∫ X 2 ∫ Y2 |L(fv(xv), f(xt))− L(ft(xt), f(xt))| dπ∗x,y((xt, yt), (xv, yv)) (22)
= ∫ Y2 ∫ X 2 |L(yv, f(xt))− L(yt, f(xt))| dπ∗x,y((xt, yt), (xv, yv)) (23)
≤ k ∫ Y2 ∫ X 2 ∥yv − yt∥ dπ∗x,y((xt, yt), (xv, yv)) (24)
= k ∫ Y2 ∥yv − yt∥ dπ∗y(yt, yv), (25)
where the second equality is due to a condition that if yt ̸= ft(xt) or yv ̸= fv(xv), then π∗x,y((xt, yt), (xv, yv)) = 0.
Now we can bound U2 as follows: U2 ≤ k ∫ Y2 ∥yv − yt∥ dπ∗y(yt, yv) (26)
= k ∫ X 2 ∫ Y2 ∥yv − yt∥ dπ̃∗x,y((xt, yt), (xv, yv)) (27)
= k ∫ Y2 ∫ X 2 ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)), (28)
where the last step holds since if yt ̸= ft(xt) or yv ̸= fv(xv) then π̃∗x,y((xt, yt), (xv, yv)) = 0.
Define the region A = {(xt, xv) : ∥fv(xv)− ft(xt)∥ < ϵtvd(xt, xv)}, then
k ∫ Y2 ∫ X 2 ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (29)
= k ∫ Y2 ∫ X 2\A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (30)
+ k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (31)
≤ k ∫ Y2 ∫ X 2\A 2V dπ̃∗x,y((xt, yt), (xv, yv)) (32)
+ k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)). (33)
Let’s define f̃t(xt) = ft(xt) and f̃v(xv) = fv(xv) if (xt, xv) ∈ A, and f̃t(xt) = f̃v(xv) = 0 otherwise (note that ∥f̃v(xv) − f̃t(xt)∥ ≤ ϵtvd(xt, xv) for all (xt, xv) ∈ X 2), then we can bound the second term as follows:
k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (34)
≤ k ∫ Y2 dπ∗y(yt, yv) ∫ A ∥fv(xv)− ft(xt)∥ dµt(xt|yt)dµv(xv|yv) (35)
= k ∫ Y2 dπ∗y(yt, yv) ∫ X 2 ∥∥∥f̃v(xv)− f̃t(xt)∥∥∥ dµt(xt|yt)dµv(xv|yv) (36) = k
∫ Y2 dπ∗y(yt, yv) ∥∥∥Exv∼µv(·|yv)[f̃v(xv)]− Ext∼µv(·|yt)[f̃t(xt)]∥∥∥ (37)
≤ kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)). (38)
Inequality (38) is a consequence of the duality form of the Kantorovich-Rubinstein theorem (Villani (2021), Chapter 1).
Combining two parts, we have U2 ≤ k ∫ Y2 ∫ X 2\A 2V dπ̃∗x,y((xt, yt), (xv, yv)) (39)
+ kδtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)) (40)
≤ 2kV δtv + kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)), (41)
where the last step is due to the probabilistic cross-Lipschitzness of ft, fv with respect to π̃∗x,y .
Now, combining the bound for U1 and U2, we have
Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))] (42) ≤ kϵ ∫ X 2 d(xt, xv)dπ(xt, xv) + 2kV δtv + kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)) (43)
= k ∫ (X×Y)2 [ϵd(xt, xv) + ϵtvWd(µt(·|yt), µv(·|yv))] dπ∗x,y((xt, yt), (xv, yv)) + 2kV δtv (44)
≤ k ∫ (X×Y)2 [ϵd(xt, xv) + cϵWd(µt(·|yt), µv(·|yv))] dπ∗x,y((xt, yt), (xv, yv)) + 2kV δtv (45)
= kϵEπ∗x,y [C((xt, yt), (xv, yv))] + 2kV δtv (46)
= kϵWC(µ ft t , µ fv v ) + 2kV δtv, (47)
where the last step is due to the definition of π∗x,y . This leads to the final conclusion.
Theorem 5 (restated). Let $\mathrm{OT}(\mu_t, \mu_v)$ and $\mathrm{OT}_\varepsilon(\mu_t, \mu_v)$ be the original formulation and the entropy-penalized formulation (as defined in Subsection 2.1) for the OT problem between the empirical measures $\mu_t$ and $\mu_v$ associated with the two datasets $\mathcal{D}_t$ and $\mathcal{D}_v$, respectively. Then, for any $i \ne j \ne k \in \{1, 2, \ldots, N\}$ and $o \ne p \ne q \in \{1, 2, \ldots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $\mathcal{D}_t$ and the difference for $z'_p$ and $z'_q$ in $\mathcal{D}_v$ can be calculated as
$$\frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_t(z_i)} - \frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_t(z_k)} = \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_t(z_i)} - \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left(\frac{1}{(\pi_\varepsilon^*)_{kj}} - \frac{1}{(\pi_\varepsilon^*)_{ij}}\right),$$
$$\frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_v(z'_p)} - \frac{\partial\, \mathrm{OT}(\mu_t, \mu_v)}{\partial\, \mu_v(z'_q)} = \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_v(z'_p)} - \frac{\partial\, \mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial\, \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left(\frac{1}{(\pi_\varepsilon^*)_{oq}} - \frac{1}{(\pi_\varepsilon^*)_{op}}\right),$$
where $\pi_\varepsilon^*$ is the optimal primal solution to the entropy-penalized OT problem, $z_j$ is any datapoint in $\mathcal{D}_t$ other than $z_i$ or $z_k$, $z'_o$ is any datapoint in $\mathcal{D}_v$ other than $z'_p$ or $z'_q$, $|\mathcal{D}_t| = N$, and $|\mathcal{D}_v| = M$.
Proof. Let L(π, f, g) and Lε(πε, fε, gε) be the Lagrangian functions for original formulation and entropy penalized formulation between the datasets Dt and Dv , respectively, which can be written as
L(π, f, g) = ⟨π, c⟩+ N∑ i=1 fi · (π′i · IN − ai) + M∑ j=1 gj · (I ′M · πj − bj),
Lε(πε, fε, gε) = ⟨πε, c⟩+ ε · N∑ i=1 M∑ j=1 log (πε)ij µt(zi) · µv(zj) + N∑ i=1 (fε)i · [(πε)′i · IM − µt(zi))]
+ M∑ j=1 (gε)j · [I ′N · (πε)j − µv(zj)],
where cN×M is the cost matrix consisting of distances between N datapoints in Dt and M datapoints in Dv, IN = (1, 1, ...1) ∈ RN×1 and I ′M = (1, 1, ...1)T ∈ R1×M , π and (f, g) denote the primal and dual variables, and π′i and πj denote the i th row and jth column in matrix π, respectively.
The first-order necessary condition for optima in Lagrangian Multiplier Theorem
gives that ∇Lπ(π∗, f∗, g∗) = 0 and ∇(Lε)π((πε)∗, (fε)∗, (gε)∗) = 0, where π∗ and (f∗, g∗) denote the optimal solutions to the primal and dual problems, respectively. Thus, for any i ∈ {1, 2, . . . , N} and j ∈ {1, 2, . . . ,M}, we have
∇Lπ(π∗, f∗, g∗)ij = cij + f∗i + g∗j = 0,
∇(Lε)π(π∗ε , f∗ε , g∗ε )ij = cij + ε · 1
(π∗ε )ij + (fε)
∗ i + (gε) ∗ j = 0.
Subtracting, we have
[f∗i − (fε)∗i ] + [ g∗j − (gε)∗j ] − ε · 1
(π∗ε )ij = 0.
Then, for ∀k ̸= i ∈ {1, 2, ...N}, we have
[f∗k − (fε)∗k] + [ g∗j − (gε)∗j ] − ε · 1
(π∗ε )kj = 0.
Subtracting and reorganizing, we get
[(fε) ∗ i − (fε)∗k] = (f∗i − f∗k )− ε ·
[ 1
(π∗ε )ij − 1 (π∗ε )kj
] .
From the definition of the calibrated gradients in Eq.1, we have
∂OT(µt, µv) ∂µt(zi) − ∂OT(µt, µv) ∂µt(zk) = N N − 1 (f∗i − f∗k ) ,
∂OTε(µt, µv) ∂µt(zi) − ∂OTε(µt, µv) ∂µt(zk) = N N − 1 [(fε) ∗ i − (fε)∗k] .
Finally, subtracting and reorganizing, we have
∂OTε(µt, µv) ∂µt(zi) − ∂OTε(µt, µv) ∂µt(zk) = ∂OT(µt, µv) ∂µt(zi) − ∂OT(µt, µv) ∂µt(zk) − ε · N N − 1 · [ 1 (π∗ε )ij − 1 (π∗ε )kj ] .
The proof for the second part of the Theorem is similar.
∂OTε(µt, µv) ∂µv(z′p) − ∂OTε(µt, µv) ∂µv(z′q) = ∂OT(µt, µv) ∂µv(z′p) − ∂OT(µt, µv) ∂µv(z′q) − ε · M M − 1 · [ 1 (π∗ε )op − 1 (π∗ε )oq ] .
Then the proof is complete.
APPENDIX B ADDITIONAL EXPERIMENTAL RESULTS
B.1 EVALUATING DATA VALUATION USE CASES ON DIVERSE DATASETS
In the main text, we have focused our evaluation on CIFAR-10. Here, we provide experiments to show effectiveness of LAVA on diverse datasets for detecting bad data.
Backdoor Attack Detection. We evaluate another type of backdoor attack (Section 4), which is the Hello Kitty blending attack (Blend) (Chen et al., 2017) that mixes the target class sample with the Hello Kitty image, as illustrated in Figure 8 (B). We attack the German Traffic Sign dataset (GTSRB) on the target class 6 by poisoning 1764 (5%) samples of the whole dataset. Our method achieves the highest detection rate, as shown in Figure 6(a). In particular, the 5000 points with lowest data values contain all poisoned data based on the LAVA data values, while the second best method on this task, KNN-SV, can cover all poisoned examples with around 11,000 samples. Our algorithm performs especially well for this attack, since the label of poisoned data is changed to the target class and the patching trigger is large. Both the label and feature changes contribute to the increase of the OT distance and thus ease the detection.
Noisy Feature Detection. Here, we show the usage of LAVA on the MNIST dataset where 25% of the whole dataset is contaminated by feature noise. Our method still outperforms all the baselines by detecting all noisy data within first 14,000 samples, which is 5,000 less than the best baseline would require, which is shown in Figure 6(b).
Figure 7: Visualization of irrelevant data detection within the CIFAR100 dataset. The left column is one example of the target class and the images on the right columns are selected irrelevant data in the corresponding classes detected by LAVA.
Irrelevant Data. We perform another irrelevant data detection experiment and focus on the CIFAR100 dataset. In Figure 7, we illustrate some of the irrelevant samples detected by LAVA. Intuitively, irrelevant data in the class should be easily detected by LAVA, since the images are far from the representative of the class and increasing the probability mass associated with these images leads to larger distributional distance to the clean validation data.
B.2 ABLATION STUDY
We perform an ablation study on validation size and on the hyperparameters in our method, where we provide insights on the impact of setting changes. We use the mislabeled detection use case and the CIFAR-10 dataset as an example setting for the ablation study.
B.2.1 VALIDATION SIZE
Figure 8: Visualization of each backdoor attack: A) Trojan-SQ attack. B) Blend attack. C) Trojan-WM attack.
For all the experiments in the main text, we use a validation set of size 10,000. Naturally, we want to examine the effect of the validation set size on the detection rate of mislabeled data. In Figure 9 (c), we illustrate the detection rate with smaller validation data sizes: 200, 500, 2,000, and 5,000. We observe that even reducing the validation set by half, to 5,000, largely maintains the detection rate. Small validation sets (200, 500, 2,000) degrade the detection rate by more than 50%. Despite the performance degradation, our detection performance with these small validation sizes is in fact comparable with the baselines in Figure 3 IV.(a) that leverage the full validation size of 10,000. Additionally, when restricting LAVA and the other baselines to a validation set of 500 samples, our method outperforms the best baseline for detecting mislabeled data in the 50k CIFAR-10 samples with 25% mislabeled, as shown in Figure 11.
B.2.2 FEATURE WEIGHT
Recall the class-wise Wasserstein distance is defined with respect to the following distance metric: C((xt, yt), (xv, yv)) = d(xt, xv)+ cWd(µt(·|yt), µv(·|yv)). Actually, one can change the relative weight between feature distance d(xt, xv) and the label distance Wd(µt(·|yt), µv(·|yv)). Here, we show the effect of upweighting the feature distance, while keeping the label weight at 1 and the results are illustrated in Figure 9 (a). As we are moving away from uniform weight, the performance on detection rate is decreasing with larger feature weights. With feature weight of 100, our method performs similarly as the random detector. Indeed, as we increase weight on the features, the weight on the label distance is decreased. As the weight reaches 100, our method performs similarly as the feature embedder without knowing label information and hence, the mislabeled detection performance is comparable to the random baseline.
B.2.3 LABEL WEIGHT
Next, we shift focus to label weight. We examine the effect of upweighting the label distance, while keeping the feature weight at 1. In Figure 9 (b), as the label weight increases, the detection rate performance deteriorates. When we increase the label distance, the feature information becomes neglected, which is not as effective as the balanced weights between feature and label distances.
B.2.4 FEATURE EMBEDDER
We use a feature embedder to extract features for the feature-distance part of our method. We train the feature embedder on the accessible validation set until the training accuracy converges. Different architectures of the embedder might be sensitive to different aspects of the input and thus produce different feature outputs. Nevertheless, as we observe in Figure 10, the detection performance associated with different model architectures of the feature embedder is similar. Hence, in practice, one can flexibly choose the feature embedder to be used in tandem with our method as long as it has large enough capacity. Furthermore, we note that these feature embedders have not learned the clean distribution from the validation data; e.g., on CIFAR-10 the model trained on 10K validation data achieves only around 65% accuracy on the 50K clean datapoints, and the model trained on 500 validation samples achieves around 25% accuracy. We additionally show in Figures 14 and 15 that our method significantly outperforms the PreActResNet18 model trained directly on validation data of size 500 and 10K in detecting bad data, which clearly distinguishes LAVA from simple feature embedders.
B.3 BALANCING UNBALANCED DATASET
Although machine learning practitioners might be using clean data for training a model, the dataset is often unbalanced, which can lead to model performance degradation (Thai-Nghe et al., 2009). To recover higher model accuracy, we can rebalance unbalanced datasets by removing points that cause such disproportion. We showcase how LAVA effectively rebalances the dataset by removing points with poor values and keeping points with the best values. We consider a CIFAR-10 dataset with the class Frog being unbalanced and containing 5,000 samples while the other classes have only half as many (i.e., 2,500 samples). In Figure 12, we demonstrate the effectiveness of LAVA's valuation, which not only reduces the dataset by removing poor-value points but also improves the model accuracy. At the same time, the other valuation methods were not able to steadily increase the model accuracy and quickly degraded the model performance, which in turn highlights the effectiveness of our method.
B.4 REDUCING TRAINING SET SIZE
With the growing size of training datasets, the computation cost and memory overhead naturally increase, which might make it impossible for some practitioners with limited resources to train a model. Therefore, the ability to reduce the training dataset size (Sener & Savarese, 2018) frees up some of the computational burden and thus allows those with limited resources to carry out the model training process. Motivated by this challenge, we want to leverage our data valuation method to significantly decrease the training dataset size while maintaining the model performance. As in the previous section, the idea is to keep a subset of datapoints with the best values and remove poorly valued ones. To demonstrate the effectiveness of LAVA's valuation, we perform such a task on a clean CIFAR-10 dataset with 2,500 samples from each class and compare with other data valuation methods. As presented in Figure 13, the performance is well maintained even with smaller subsets of the original dataset. Remarkably, even after reducing a clean training set (25,000 samples) by 15% based on our method's valuation, the performance stays relatively high while outperforming the other valuation baselines.
B.5 DATA SUMMARIZATION
As dataset sizes grow, so does the space needed to store the data. Thus, a buyer often would like to shrink the dataset to minimize resources while retaining performance. Unlike reducing the training set size as in Section B.4, in this experiment we select a smaller, representative subset of the whole dataset that can maintain good performance. To measure the performance of each subset, we compute the validation performance of the model trained on that subset minus the validation performance of the model trained on a random subset of the same size, following the experiment performed in Kwon & Zou (2021). In Figure 16, we can observe that our method selects a small subset that performs better than the subsets chosen by the baseline methods most of the time.
B.6 SCALABILITY EXPERIMENT
In the main paper, we have demonstrated the time complexity comparison between LAVA and other valuation methods. We reported runtime comparisons only for 2,000 test samples, as this is the scale existing methods can handle in a reasonable time (within a day); it showcases the computing efficiency that the proposed approach enjoys over other methods. We further demonstrate the computational efficiency of LAVA on a larger-scale dataset with higher dimensions, ImageNet-100 (100,000 samples). Additionally, we evaluate the baselines that are able to finish within a day of computation to highlight the advantage of our method, as presented in Table 1. Moreover, we highlight the near-linear time complexity of LAVA on CIFAR-10, which shows the practical computational efficiency of our method, as shown in Figure 17.
B.7 GENERALIZATION TO OTHER TYPES OF BACKDOOR ATTACKS
As we have provided the results of the Trojan square attack (TrojanSQ) (Liu et al., 2017) in Section 4, we now apply LAVA to other backdoor attacks, which are Hello Kitty blending attack (Blend) (Chen et al., 2017) and Trojan watermark attack (Trojan-WM) (Liu et al., 2017), and evaluate the efficacy of our method in detecting different types of backdoor attacks. We simulate these attacks by selecting the target class Airplane and poisoning 2, 500 (5%) samples of the CIFAR-10 dataset of size 50, 000. The backdoor trigger adopted in each attack is portrayed in Figure 8. In Figure 18, we observe that our method can achieve superior detection performance on all the attacks considered. The reason is that despite the difference in trigger pattern, all of these attacks modify both the label and the feature of a poisoned image and thus result in the deviation of our distributional distance that is defined over the product space of feature and label.
B.8 IMPLICATIONS OF THE PROPOSED DATA VALUATION METHOD TO REAL-WORLD DATA MARKETPLACES
One concern in a real-world data marketplace is that data is freely replicable. However, replicated data introduces no new information, and prior work has therefore argued that a data utility function should be robust to direct data copying (Xu et al., 2021). One advantage of using the class-wise Wasserstein distance to measure data utility is that it is robust to duplication: by its distributional formulation, our method ignores duplicated sets. As shown in Table 3, even after repeating the set five times over the original source set, the distance remains the same. Additionally, with small noise changes in the features, the distance metric is barely affected. Another concern in a real-world marketplace is that one might find a single datapoint with the highest contribution and duplicate it to maximize profit. However, again due to the nature of our distributional formulation, duplicating a single point multiple times would increase the distance between the training and the validation set, due to the imbalance in the training distribution caused by copying that point.
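This robustness can be checked directly with the dataset-distance sketch from Section 2.1 on small feature arrays: duplicating every point leaves the empirical measure, and hence the distance, unchanged (up to solver tolerance). The tolerance and the five-fold duplication below are illustrative.

```python
import numpy as np

d_once = classwise_wasserstein(Xt, yt, Xv, yv)
Xt_dup, yt_dup = np.tile(Xt, (5, 1)), np.tile(yt, 5)      # five verbatim copies of the training set
d_dup = classwise_wasserstein(Xt_dup, yt_dup, Xv, yv)
assert np.isclose(d_once, d_dup, rtol=1e-3)               # duplication does not change the distance
```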
B.9 DETAILED EXPERIMENTAL SETTINGS
Datasets and Models. Table 2 summarizes the details of the dataset, the models, as well as their licenses adopted in our experiments.
Hardware. A server with an NVIDIA Tesla P100-PCIE-16GB graphic card is used as the hardware platform in this work.
Software.
For our implementation, we use PyTorch for the main framework (Paszke et al., 2019), assisted by three main libraries, which are otdd (optimal transport calculation setup with datasets) (Alvarez-Melis & Fusi, 2020), geomloss (actual optimal transport calculation) (Feydy et al., 2019), and numpy (tool for array routines) (Harris et al., 2020).

1. What is the main contribution of the paper regarding datapoint valuation?
2. How does the proposed method incorporate label information into the dataset distance design?
3. Can you provide more explanations on the connection between OT distance sensitivity and individual datapoint value?
4. How does the paper address the limitations of LAVA in the conclusion?
5. What are the weaknesses of the paper regarding its motivation and connections with existing works?
6. How does LAVA perform in data summarization tasks?
7. Can you clarify the calculation of the actual change in the OT distance in Section 3.1?
8. Why is the feature extractor trained with the validation dataset? Will this introduce any form of bias?
9. Can you explain the abnormal behavior of baseline methods in Figure 3?
10. Any explanation for LAVA's performance in Poisoning Attack Detection experiments?
11. How do you select or manipulate datasets to create different Wasserstein distances to the validation set?
12. How does the model directly trained on the validation dataset detect bad data?

Summary Of The Paper
This paper proposes a datapoint valuation method, LAVA, that does not require a pre-defined learning algorithm, which is a common assumption in the existing literature. It utilizes the Wasserstein distance between the training set and the validation set with respect to a hybrid cost that considers both feature and label distances. The authors prove that the distance has theoretical connections with the validation performance (i.e., the utility) and show that the distance can be efficiently calculated using existing solvers. With that, the authors propose to use calibrated gradient to account for a datapoint’s contribution to the distance and thus its value. The paper has done extensive empirical experiments on 5 important application scenarios and demonstrated superior performances. Ablation studies are also done to better understand the behaviors of the proposed method.
Strengths And Weaknesses
Advantages:
The paper is well-written and easy to follow. The authors additionally give lots of interpretations after results or formulations to help with the understanding.
Label information is incorporated into the dataset distance design.
The theoretical part of the paper is sound. Also, the connection from OT distance sensitivity to individual datapoint value is neat and natural.
The author provides rather extensive empirical validations of the method and showed state-of-the-art performance. An interesting new application on “irrelevant data detection” is also included, which was originally often discussed as mislabeled data detection.
Discussions on the limitations of LAVA in the conclusion are valid and indeed insightful.
Disadvantages:
The motivation for developing a learning-agnostic valuation method, especially in the context of the existing works with similar dataset distance approaches could be improved.
Some small parts of the experiment details require clarification. For more details, see below.
Clarity, Quality, Novelty And Reproducibility
In the introduction, one motivation for a learning-agnostic method is that the retrainings of the models are too expensive in LOO and CGT. However, I do not think LAVA directly addresses this problem, at least about CGT, because LAVA is more like a perturbation-based contribution measure essentially similar to influence function (INF). LAVA performs a LOO-like computation and does not consider combinations (like in CGT). Also, please give formal justifications with references that the LOO and CGT methods “remain expensive” given the approximations.
This is related to the novelty of the work and how much insight it has contributed to the community. There have been attempts to use distribution divergence or dataset differences for data valuation. One reason for choosing OT is that it is computationally tractable from finite samples; as far as I know, the MMD used in [1] has similar properties. It is also essential to draw connections between distribution divergence and learning performance: another work [2] also partially uses MMD to bound generalization performance. Discussions might be needed to examine the significance of LAVA in the context of the existing works mentioned above.
The paper has extensive applications and great performance in detecting “bad” data. However, we care about “good” data as well. Can LAVA perform data summarization tasks (another common application of data valuation) as well? I know that Appendix B.4 is relevant to this question, but I am looking at a much smaller subset size for summarization. For example, can you select 1K points to train a network? This is essentially about the effectiveness of LAVA in picking representative and highly valued datapoints. I would expect a faster increase in model performance when including the highest value points first.
In Section 3.1, it is unclear to me how to calculate the actual change in the OT distance when we perturb the datapoint probability mass by a given amount, given that the OT distance is computed from finite samples.
Can you also clarify why is the difference in groundtruth values of the calibrated gradients in Section 3.2 enough to rank all data points? Do you calculate this difference with respect to a common datapoint selected and then rank them accordingly?
Why is the feature extractor trained with the validation dataset? Will this introduce any form of bias since the validation dataset will be used for the data valuation step? Will an extractor trained on the overall training dataset work?
In Figure 3, it really confuses me that most of the baseline methods perform even worse than random guessing in terms of the detection rate. Are the baselines implemented correctly? Or do you have an explanation for this abnormal behavior? Also, it is weird that some attack/model accuracies do not start from the same point when you start to throw away data (bottom plots).
Regarding the Poisoning Attack Detection experiments, you mentioned that this task is “especially hard to detect”. However, the baseline methods actually work better than those in Backdoor Detection. In contrast, LAVA does not perform here as well as that in Backdoor Detection. Any explanation?
Minor:
Referring to the discussion on Figure 1, it would be helpful to briefly explain how you select (or manipulate) the datasets to create different Wasserstein distances to the validation set when drawing the curves. Or alternatively, do you change the validation set?
For Appendix B.2.4, I am interested to know how the model directly trained on the validation dataset detects bad data.
References:
[1] Incentivizing Collaboration in Machine Learning via Synthetic Data Rewards. AAAI 2022.
[2] DAVINZ: Data Valuation using Deep Neural Networks at Initialization. ICML 2022. |
ICLR
LAVA: Data Valuation without Pre-Specified Learning Algorithms
Abstract
Traditionally, data valuation is posed as a problem of equitably splitting the validation performance of a learning algorithm among the training data. As a result, the calculated data values depend on many design choices of the underlying learning algorithm. However, this dependence is undesirable for many use cases of data valuation, such as setting priorities over different data sources in a data acquisition process and informing pricing mechanisms in a data marketplace. In these scenarios, data needs to be valued before the actual analysis and the choice of the learning algorithm is still undetermined then. Another side-effect of the dependence is that to assess the value of individual points, one needs to re-run the learning algorithm with and without a point, which incurs a large computation burden. This work leapfrogs over the current limits of data valuation methods by introducing a new framework that can value training data in a way that is oblivious to the downstream learning algorithm. Our main results are as follows. (1) We develop a proxy for the validation performance associated with a training set based on a non-conventional class-wise Wasserstein distance between the training and the validation set. We show that the distance characterizes the upper bound of the validation performance for any given model under certain Lipschitz conditions. (2) We develop a novel method to value individual data based on the sensitivity analysis of the class-wise Wasserstein distance. Importantly, these values can be directly obtained for free from the output of off-the-shelf optimization solvers when computing the distance. (3) We evaluate our new data valuation framework over various use cases related to detecting low-quality data and show that, surprisingly, the learning-agnostic feature of our framework enables a significant improvement over the state-of-the-art performance while being orders of magnitude faster.
1 INTRODUCTION
Advances in machine learning (ML) crucially rely on the availability of large, relevant, and highquality datasets. However, real-world data sources often come in different sizes, relevance levels, and qualities, differing in their value for an ML task. Hence, a fundamental question is how to quantify the value of individual data sources. Data valuation has a wide range of use cases both within the domain of ML and beyond. It can help practitioners enhance the model performance through prioritizing high-value data sources (Ghorbani & Zou, 2019), and it allows one to make strategic and economic decisions in data exchange (Scelta et al., 2019).
In the past literature (Ghorbani & Zou, 2019; Jia et al., 2019b; Kwon & Zou, 2021), data valuation is posed as a problem of equitably splitting the validation performance of a given learning algorithm among the training data. Formally, given a training dataset Dt = {zi}Ni=1, a validation dataset Dv , a learning algorithm A, and a model performance metric PERF (e.g., classification accuracy), a utility function is first defined over all subsets S ⊆ Dt of the training data: U(S) := PERF(A(S)). Then, the objective of data valuation is to find a score vector s ∈ RN that represents the allocation to each datapoint. For instance, one simple way to value a point zi is through leave-one-out (LOO) error U(Dt)− U(Dt \ {zi}), i.e., the change of model performance when the point is excluded from training. Most of the recent works have leveraged concepts originating from cooperative game theory (CGT), such as the Shapley value (Ghorbani & Zou, 2019; Jia et al., 2019b), Banzhaf value (Wang
∗Equal contribution. Repository publicly available on Github: https://github.com/ruoxi-jia-group/LAVA.
& Jia, 2022), general semivalues (Kwon & Zou, 2021), and Least cores (Yan & Procaccia, 2021) to value data. Like the LOO, all of these concepts are defined based on the utility function.
Since the utility function is defined w.r.t. a specific learning algorithm, the data values calculated from the utility function also depend on the learning algorithm. In practice, there are many choice points pertaining to a learning algorithm, such as the model to be trained, the type of learning algorithm, as well as the hyperparameters. The detailed settings of the learning algorithms are often derived from data analysis. However, in many critical applications of data valuation such as informing data acquisition priorities and designing data pricing mechanism, data needs to be valued before the actual analysis and the choice points of the learning algorithm are still undetermined at that time. This gap presents a main hurdle for deploying existing data valuation schemes in the real world.
The reliance on learning algorithms also makes existing data valuation schemes difficult to scale to large datasets. The exact evaluation of LOO error and CGT-based data value notions require evaluating utility functions over different subsets and each evaluation entails retraining the model on that subset: the number of retraining times is linear in the number of data points for the former, and exponential for the latter. While existing works have proposed a variety of approximation algorithms, scaling up the calculation of these notions to large datasets remains expensive. Further, learning-algorithm-dependent approaches rely on the performance scores associated with models trained on different subsets to determine the value of data; thus, they are susceptible to noise due to training stochasticity when the learning algorithm is randomized (e.g., SGD) (Wang & Jia, 2022).
This work addresses these limitations by introducing a learning-agnostic data valuation (LAVA) framework. LAVA is able to produce efficient and useful estimates of data value in a way that is oblivious to downstream learning algorithms. Our technical contributions are listed as follows.
Proxy for validation performance. We propose a proxy for the validation performance associated with a training set based on the non-conventional class-wise Wasserstein distance (Alvarez-Melis & Fusi, 2020) between the training and the validation set. The hierarchically-defined Wasserstein distance utilizes a hybrid Euclidean-Wasserstein cost function to compare the feature-label pairs across datasets. We show that this distance characterizes the upper bound of the validation performance of any given models under certain Lipschitz conditions.
Sensitivity-analysis-based data valuation. We develop a method to assess the value of an individual training point by analyzing the sensitivity of the particular Wasserstein distance to the perturbations on the corresponding probability mass. The values can be directly obtained for free from the output of off-the-shelf optimization solvers once the Wasserstein distance is computed. As the Wasserstein distance can be solved much more efficiently with entropy regularization (Cuturi, 2013), in our experiments, we utilize the duals of the entropy-regularized program to approximate the sensitivity. Remarkably, we show that the gap between two data values under the original non-regularized Wasserstein distance can be recovered exactly from the solutions to the regularized program.
State-of-the-art performance for differentiating data quality. We evaluate LAVA over a wide range of use cases, including detecting mislabeled data, backdoor attacks, poisoning attacks, noisy features, and task-irrelevant data, in which some of these are first conducted in the data valuation setting. Our results show that, surprisingly, the learning-agnostic feature of our framework enables a significant performance improvement over existing methods, while being orders of magnitude faster.
2 MEASURING DATASET UTILITY VIA OPTIMAL TRANSPORT
In this section, we consider the problem of quantifying training data utility U(Dt) without the knowledge of learning algorithms. Similar to most of the existing data valuation frameworks, we assume access to a set of validation points Dv. Our idea is inspired by recent work on using the hierarchically-defined Wasserstein distance to characterize the relatedness of two datasets (AlvarezMelis & Fusi, 2020). Our contribution here is to apply that particular Wasserstein distance to the data valuation problem and provide a theoretical result that connects the distance to validation performance of a model, which might be of independent interest.
2.1 OPTIMAL TRANSPORT-BASED DATASET DISTANCE
Background on Optimal Transport (OT). OT is a celebrated choice for measuring the discrepancy between probability distributions (Villani, 2009). Compared to other notable dissimilarity measures such as the Kullback-Leibler Divergence (Kullback & Leibler, 1951) or Maximum Mean Discrepancies (MMD) (Szekely et al., 2005), the mathematically well-defined OT distance has advantageous analytical properties. For instance, OT is a distance metric, being computationally tractable and computable from finite samples (Genevay et al., 2018; Feydy et al., 2019).
The Kantorovich formulation (Kantorovich, 1942) defines the OT problem as a Linear Program (LP). Given probability measures $\mu_t, \mu_v$ over the space $\mathcal{Z}$, the OT problem is defined as
$$\mathrm{OT}(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z'),$$
where $\Pi(\mu_t, \mu_v) := \left\{ \pi \in \mathcal{P}(\mathcal{Z} \times \mathcal{Z}) \;\middle|\; \int_{\mathcal{Z}} \pi(z, z')\, dz = \mu_t, \ \int_{\mathcal{Z}} \pi(z, z')\, dz' = \mu_v \right\}$ denotes the collection of couplings between the two distributions $\mu_t$ and $\mu_v$, and $\mathcal{C}: \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}^+$ is some symmetric positive cost function (with $\mathcal{C}(z, z) = 0$). If $\mathcal{C}(z, z')$ is the Euclidean distance between $z$ and $z'$ according to the distance metric $d$, then $\mathrm{OT}(\mu_t, \mu_v)$ is the 2-Wasserstein distance, which we denote as $\mathcal{W}_{\mathcal{C}}(\mu_t, \mu_v) = \mathcal{W}_d(\mu_t, \mu_v) := \mathrm{OT}(\mu_t, \mu_v)$. In this work, the notations OT and $\mathcal{W}$ are used interchangeably, with the slight difference that we use OT to emphasize its various formulations while $\mathcal{W}$ specifies the distance metric on which it is computed.
Measuring Dataset Distance. We consider a multi-label setting where we denote $f_t: \mathcal{X} \to \{0,1\}^V$, $f_v: \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, respectively, where $V$ is the number of different labels. Given the training set $D_t = \{(x_i, f_t(x_i))\}_{i=1}^N$ of size $N$ and the validation set $D_v = \{(x'_i, f_v(x'_i))\}_{i=1}^M$ of size $M$, one can construct discrete measures $\mu_t(x, y) := \frac{1}{N}\sum_{i=1}^N \delta_{(x_i, y_i)}$ and $\mu_v(x, y) := \frac{1}{M}\sum_{i=1}^M \delta_{(x'_i, y'_i)}$, where $\delta$ is the Dirac function. Consider that each datapoint consists of a feature-label pair $(x_i, y_i) \in \mathcal{X} \times \mathcal{Y}$. While the Euclidean distance naturally provides the metric to measure distance between features, the distance between labels generally lacks a definition. Consequently, we define conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Inspired by Alvarez-Melis & Fusi (2020), we measure the distance between two labels in terms of the OT distance between the conditional distributions of the features given each label. Formally, we adopt the following cost function between feature-label pairs: $\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c \geq 0$ is a weight coefficient. We note that $\mathcal{C}$ is a distance metric since $\mathcal{W}_d$ is a valid distance metric. With the definition of $\mathcal{C}$, we propose to measure the distance between the training and validation sets using the non-conventional, hierarchically-defined Wasserstein distance between the corresponding discrete measures: $\mathcal{W}_{\mathcal{C}}(\mu_t, \mu_v) = \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z')$.
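For readers who want to see the pieces concretely, here is a minimal sketch of this hybrid cost, written with the POT library rather than the paper's otdd/geomloss stack; the function names, the uniform per-class weights, the plain Euclidean ground metric, and the assumption that `classes` is the sorted array of label values are all illustrative.

```python
# Minimal sketch of the hybrid cost C((x_t, y_t), (x_v, y_v)) = d(x_t, x_v)
# + c * W_d(mu_t(.|y_t), mu_v(.|y_v)): label-to-label distances are themselves OT
# distances between the per-class feature distributions.
import numpy as np
import ot

def label_distances(Xt, yt, Xv, yv, classes):
    W = np.zeros((len(classes), len(classes)))
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            A, B = Xt[yt == ci], Xv[yv == cj]
            a = np.full(len(A), 1.0 / len(A))
            b = np.full(len(B), 1.0 / len(B))
            W[i, j] = ot.emd2(a, b, ot.dist(A, B, metric="euclidean"))  # W_d(mu_t(.|ci), mu_v(.|cj))
    return W

def hybrid_cost(Xt, yt, Xv, yv, classes, c=1.0):
    W = label_distances(Xt, yt, Xv, yv, classes)
    feat = ot.dist(Xt, Xv, metric="euclidean")     # d(x_t, x_v)
    row = np.searchsorted(classes, yt)             # label index of each training point
    col = np.searchsorted(classes, yv)             # label index of each validation point
    return feat + c * W[row][:, col]               # N x M hybrid cost matrix
```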
Despite its usefulness and potentially broad applications, existing research has neither explored the theoretical properties of this notion nor established applications upon it. This work aims to fill this gap by extending in both directions: novel analytical results are presented to provide theoretical justification, while an original computational framework is proposed that extends its application to a new scenario of datapoint valuation.
Computational Acceleration via Entropic Regularization. Solving the problem above scales cubically with $MN$, which is prohibitive for large datasets. Entropy-regularized OT (entropy-OT) becomes a prevailing choice for approximating OT distances as it allows for the fastest-known algorithms. Using the iterative Sinkhorn algorithm (Cuturi, 2013), with almost linear time complexity and memory overhead, entropy-OT can be implemented on a large scale with parallel computing (Genevay et al., 2018; Feydy et al., 2019). Given a regularization parameter $\varepsilon > 0$, entropy-OT can be formulated as
$$\mathrm{OT}_\varepsilon(\mu_t, \mu_v) := \min_{\pi \in \Pi(\mu_t, \mu_v)} \int_{\mathcal{Z}^2} \mathcal{C}(z, z')\, d\pi(z, z') + \varepsilon H(\pi \,|\, \mu_t \otimes \mu_v), \quad \text{where } H(\pi \,|\, \mu_t \otimes \mu_v) = \int_{\mathcal{Z}^2} \log\!\left(\frac{d\pi}{d\mu_t d\mu_v}\right) d\pi.$$
As $\varepsilon \to 0$, the dual solutions to the $\varepsilon$-entropy-OT converge to their OT counterparts as long as the latter are unique (Nutz & Wiesel, 2021).
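Below is a small, self-contained sketch of the entropy-regularized solve, using POT's stabilized Sinkhorn as an illustrative stand-in for the geomloss solver used in the paper; the cost matrix here is a random placeholder for the hybrid cost $\mathcal{C}$.

```python
# Sketch of the entropic-OT computation between the two datasets.
import numpy as np
import ot

N, M = 500, 400
cost = np.abs(np.random.randn(N, M))     # placeholder for the N x M hybrid cost C
mu_t = np.full(N, 1.0 / N)               # uniform mass on training points
mu_v = np.full(M, 1.0 / M)               # uniform mass on validation points
eps = 0.1                                # regularization strength used in the paper
plan = ot.sinkhorn(mu_t, mu_v, cost, reg=eps, method="sinkhorn_stabilized")
ot_eps = float((plan * cost).sum())      # transport cost under the entropic plan
```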
2.2 LOWER Class-Wise Wasserstein Distance ENTAILS BETTER VALIDATION PERFORMANCE
In this paper, we propose to use $\mathcal{W}_{\mathcal{C}}$, a non-conventional, class-wise Wasserstein distance w.r.t. the special distance function $\mathcal{C}$ defined in 2.1, as a learning-agnostic surrogate of validation performance to measure the utility of training data. Note that while Wasserstein distances have been frequently used to bound the learning performance change due to distribution drift (Courty et al., 2017; Damodaran et al., 2018; Shen et al., 2018; Ge et al., 2021), this paper is the first to bound the performance change by the hierarchically-defined Wasserstein distance with respect to the hybrid cost $\mathcal{C}$. Figure 1 provides an empirical justification for using this novel distance metric as a proxy, and presents a relation between the class-wise Wasserstein distance and a model's validation performance. Each curve corresponds to a dataset on which a specific model is trained and evaluated. Since each dataset differs in size and structure, the distances are on different scales; therefore, we normalize the distances to a common scale to present the relation between the Wasserstein distance and model performance, which shows that, despite the different datasets and models, the validation performance decreases as the distance increases.
The next theorem theoretically justifies using this Wasserstein distance as a proxy for validation performance of a model. With assumptions on Lipschitzness of the downstream model as well as the labeling functions associated with the training and validation sets (as explicated in Appendix A), we show that the discrepancy between the training and validation performance of a model is bounded by the hierarchically-defined Wasserstein distance between the training and the validation datasets.
Theorem 1. We denote $f_t: \mathcal{X} \to \{0,1\}^V$, $f_v: \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. Let $f: \mathcal{X} \to [0,1]^V$ be the model trained on training data. By definition, we have that $\|f(\cdot)\|, \|f_t(\cdot)\|, \|f_v(\cdot)\| \leq V$. Let $\mu_t, \mu_v$ be the training and validation distributions, respectively, and let $\mu_t(\cdot|y)$ and $\mu_v(\cdot|y)$ be the corresponding conditional distributions given label $y$. Assume that the model $f$ is $\epsilon$-Lipschitz and the loss function $\mathcal{L}: \{0,1\}^V \times [0,1]^V \to \mathbb{R}^+$ is $k$-Lipschitz in both inputs. Define the cost function $\mathcal{C}$ between $(x_v, y_v)$ and $(x_t, y_t)$ as $\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$, where $c$ is a constant. Under a certain cross-Lipschitzness assumption for $f_t$ and $f_v$ detailed in Appendix A, we have
$$\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \leq \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\,\mathcal{W}_{\mathcal{C}}(\mu_t, \mu_v) + \mathcal{O}(kV).$$
Proofs are deferred to Appendix A. The bound is interesting to interpret. The first term on the right-hand side corresponds to the training performance. In practice, when a model with large enough capacity is used, this term is small. The second one is the exact expression of the Wasserstein distance that we propose to use as a proxy for validation performance. The last error term is due to possible violation of the cross-Lipschitzness assumption for ft and fv. This term will be small if ft and fv assign the same label to close features with high probability. If the last term is small enough, it is possible to use the proposed Wasserstein distance as proxy for validation loss provided that f , ft and fv verify the cross-Lipschitz assumptions. The bound resonates with the empirical observation in Figure 1 that with lower distance between the training and the validation data, the validation loss of the trained model decreases.
3 EFFICIENT VALUATION OF INDIVIDUAL DATAPOINTS
Note that the class-wise Wasserstein distance defined in the previous section can be used to measure the utility for subsets of Dt. Given this utility function, one can potentially use existing CGT-based notions such as the Shapley value to measure the contribution of individual points. However, even approximating these notions requires evaluating the utility function on a large number of subsets, which incurs large extra computation costs. In this section, we introduce a new approach to valuating individual points. Remarkably, our values can be directly obtained for free from the output of off-the-shelf optimization solvers once the proposed Wasserstein distance between the full training and testing datasets is computed.
3.1 DATAPOINT VALUATION VIA PARAMETER SENSITIVITY
OT distance is known to be insensitive to small differences while also being not robust to large deviations (Villani, 2021). This feature is naturally suitable for detecting abnormal datapoints— disregarding normal variations in distances between clean data while being sensitive to abnormal distances of outlying points. We propose to measure individual points’ contribution based on the gradient of the OT distance to perturbations on the probability mass associated with each point.
Gradients are local information. However, unlike widely used influence functions that only hold for infinitesimal perturbation (Koh & Liang, 2017), gradients for LP hold precisely in a local range and still encode partial information beyond that range, making it capable of reliably predicting the change to the OT distance due to adding or removing datapoints without the need of re-calculation. Also, the gradients are directed information, revealing both positive and negative contributions for each
datapoint and allowing one to perform ranking of datapoints based on the gradient values. Finally, the OT distance always considers the collective effect of all datapoints in the dataset.
Leveraging the duality theorem for LP, we rewrite the original OT problem (introduced in 2.1) in the equivalent form: $\mathrm{OT}(\mu_t, \mu_v) := \max_{(f,g) \in C^0(\mathcal{Z})^2} \langle f, \mu_t\rangle + \langle g, \mu_v\rangle$, where $C^0(\mathcal{Z})$ is the set of all continuous functions and $f$ and $g$ are the dual variables. Let $\pi^*$ and $(f^*, g^*)$ be the corresponding optimal solutions to the primal and dual problems. The Strong Duality Theorem indicates that $\mathrm{OT}(\pi^*(\mu_t, \mu_v)) = \mathrm{OT}(f^*, g^*)$, where the right-hand side is the distance parameterized by $\mu_t$ and $\mu_v$. From the Sensitivity Theorem (Bertsekas, 1997), we have that the gradient of the distance w.r.t. the probability mass of datapoints in the two datasets can be expressed as follows: $\nabla_{\mu_t} \mathrm{OT}(f^*, g^*) = (f^*)^T$, $\nabla_{\mu_v} \mathrm{OT}(f^*, g^*) = (g^*)^T$. Note that the original formulation in 2.1 is always redundant as the constraint $\sum_{i=1}^N \mu_t(z_i) = \sum_{i=1}^M \mu_v(z'_i) = 1$ is already implied, rendering the dual solution non-unique. To address this issue, we first remove any one of the constraints in $\Pi(\mu_t, \mu_v)$ and make the primal formulation non-degenerate. Then, we assign a value of zero to the dual variable corresponding to that removed primal constraint.

When measuring the gradients of the OT distance w.r.t. the probability mass of a given datapoint in each dataset, we calculate the calibrated gradient as
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} = f_i^* - \sum_{j \in \{1,\ldots,N\}\setminus i} \frac{f_j^*}{N-1}, \qquad \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_i)} = g_i^* - \sum_{j \in \{1,\ldots,M\}\setminus i} \frac{g_j^*}{M-1}, \tag{1}$$
which represents the rate of change in the OT distance w.r.t. the change of the probability mass of a given datapoint along the direction ensuring that the probability mass of all datapoints in the dataset always sums up to one (explicitly enforcing the removed constraint). The value of the calibrated gradients is independent of the choice of selection during the constraint removal.
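As a concrete illustration of Eq. (1), the snippet below computes the calibrated gradients from dual vectors $f^*$ and $g^*$; how the duals are obtained (e.g., from a solver that exposes dual potentials, such as geomloss with potentials enabled) is assumed, and the variable names are illustrative.

```python
# Sketch of Eq. (1): calibrated gradients from the dual potentials of the OT problem.
import numpy as np

def calibrated_gradients(f_star, g_star):
    N, M = len(f_star), len(g_star)
    # d OT / d mu_t(z_i) = f*_i - mean of the other dual values on the training side
    grad_t = f_star - (f_star.sum() - f_star) / (N - 1)
    # d OT / d mu_v(z'_i) = g*_i - mean of the other dual values on the validation side
    grad_v = g_star - (g_star.sum() - g_star) / (M - 1)
    return grad_t, grad_v

# One convention for ranking: a large positive gradient means shifting mass to the
# point would increase the distance to the validation set, i.e., a low-quality point.
# order = np.argsort(-grad_t)   # most suspicious points first
```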
Datapoint valuation via calibrated gradients. The calibrated gradients predict how the OT distance changes as more probability mass is shifted to a given datapoint. This can be interpreted as a measure of the contribution of the datapoint to the OT distance. The contribution can be positive or negative, suggesting shifting more probability mass to this datapoint would result in an increase or decrease of the dataset distance, respectively. If we want a training set to match the distribution of the validation dataset, then removing datapoints with large positive gradients while increasing datapoints with large negative gradients can be expected to reduce their OT distance. As we will show later, the calibrated gradients can provide a tool to detect abnormal or irrelevant data in various applications.
Radius for accurate predictions. The Linear Programming theories (Bertsimas & Tsitsiklis, 1997) give that for each non-degenerate optimal solution, we are always able to perturb parameters on the right-hand side of primal constraints (Π(µt, µv) in 2.1) in a small range without affecting the optimal solution to the dual problem. When the perturbation goes beyond a certain range, the dual solution becomes primal infeasible and the optimization problem needs to be solved again. Hence, the calibrated gradients are local information and we would like to know the perturbation radius such that the optimal dual solution remains unchanged—i.e., whether this range is large enough such that the calibrated gradients can accurately predict the actual change to the OT distance. If the perturbation goes beyond this range, the prediction may become inaccurate as the dual solution only encodes partial information about the optimization.
In our evaluation, we find that this range is about 5% to 25% of the probability measure of the datapoint (µ(·)(zi)) for perturbations in both directions and the pattern seems independent of the size of the datasets. This range being less than the probability mass of a datapoint suggests that we are only able to predict the change to the OT distance for removing/adding a datapoint to the dataset approximately, though, the relative error is well acceptable (depicted in Figure 2).
3.2 PRECISE RECOVERY OF RANKING FOR DATA VALUES OBTAINED FROM ENTROPY-OT
Due to computational advantages of the entropy-OT (defined in Eq. 2.1), one needs to resort to the solutions to entropy-OT to calculate data values. We quantify the deviation in the calibrated gradients caused by the entropy regularizer. This analysis provides foundations on the potential impact of the deviation on the applications built on these gradients. Theorem 2. Let OT(µt, µv) and OTε(µt, µv) be the original formulation and entropy penalized formulation (as defined in 2.1) for the OT problem between the empirical measures µt and µv associated with the two datasets Dt and Dv, respectively, where |Dt| = N and |Dv| = M . Then,
for any $i \neq j \neq k \in \{1, 2, \ldots, N\}$ and $o \neq p \neq q \in \{1, 2, \ldots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $D_t$ and the difference for $z'_p$ and $z'_q$ in $D_v$ can be calculated as
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{kj}} - \frac{1}{(\pi^*_\varepsilon)_{ij}} \right), \tag{2}$$
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{qo}} - \frac{1}{(\pi^*_\varepsilon)_{po}} \right), \tag{3}$$
where $\pi^*_\varepsilon$ is the optimal primal solution to the entropy-penalized OT problem defined in 2.1, $z_j$ is any datapoint in $D_t$ other than $z_i$ or $z_k$, and $z'_o$ is any datapoint in $D_v$ other than $z'_p$ or $z'_q$.
The gradient difference on the left-hand side of (2) represents the groundtruth value difference between two training points zi and zk as the values are calculated based on the original OT formulation. In practice, for the sake of efficiency, one only solves the regularized formulation instead and, therefore, this groundtruth difference cannot be obtained directly. Theorem 2 nevertheless indicates a very interesting fact that one can calculate the groundtruth difference based on the solutions to the regularized problem, because every term in the right-hand side only depends on the solutions to the regularized problem. Particularly, the groundtruth value difference is equal to the value difference produced by the regularized solutions plus some calibration terms that scale with ε (Nutz & Wiesel, 2021). This result indicates that while it is not possible to obtain individual groundtruth value by solving the regularized problem, one can actually exactly recover the groundtruth value difference based on the regularized solutions. In many applications of data valuation such as data selection, it is the order of data values that matters (Kwon & Zou, 2021). For instance, to filter out low-quality data, one would first rank the datapoints based on their values and then throw the points with lowest values. In these applications, solving the entropy-regularized program is an ideal choice—which is both efficient and recovers the exact ranking of datapoint values. Finally, note that Eq. 3 presents a symmetric result for the calibrated gradients for validation data. In our experiments, we set ϵ = 0.1, rendering the corresponding calibration terms to be negligible. As a result, we can directly use the calibrated gradients solved by the regularized program to rank datapoint values.
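A small illustrative helper for the correction term in Theorem 2 is sketched below; `plan`, `grad_eps`, and the index arguments are assumed inputs, and the snippet simply transcribes Eq. (2).

```python
# Sketch of Theorem 2: recover the ground-truth gap between the calibrated gradients
# of training points i and k from the entropic solution. `plan` is the N x M entropic
# transport plan, `grad_eps` the calibrated gradients from the entropic duals, and j
# is any reference training point other than i or k (used as a column index).
def groundtruth_gap(grad_eps, plan, eps, i, k, j):
    N = plan.shape[0]
    correction = eps * N / (N - 1) * (1.0 / plan[k, j] - 1.0 / plan[i, j])
    return (grad_eps[i] - grad_eps[k]) - correction
```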
4 EXPERIMENTS
In this section, we demonstrate the practical efficacy and efficiency of LAVA on various classification datasets. We compare with nine baselines: (1) Influence functions (INF) (Koh & Liang, 2017), which approximates the LOO error with first-order extrapolation; (2) TracIn-Clean (Pruthi et al., 2020), which accumulates the loss change on validation data during training whenever the training point of interest is sampled; (3) TracIn-Self (Pruthi et al., 2020), which is similar to TracIn-Clean but accumulates the training loss changes; (4) KNN-Shapley (KNN-SV) (Jia et al., 2019a), which
approximates the Shapley value using K-Nearest-Neighbor as a proxy model; and (5) Random, a setting where we select a random subset from the target dataset. We also consider the popular data valuation approaches: (6) Permutation Sampling-based Shapley value (Perm-SV) (Jia et al., 2019b), (7) Least Cores (LC) (Yan & Procaccia, 2021), (8) TMC-Shapley (TMC-SV) and (9) G-Shapley (G-SV) (Ghorbani & Zou, 2019). Baselines (6)-(9) are, however, computationally infeasible for the scale of data that we study here, so we exclude them from the evaluation of efficacy in different use cases. We also provide a detailed runtime comparison of all baselines. For all methods to be compared, a validation set of 10,000 samples is assumed. For our method, we first use the validation data to train a deep neural network model, PreActResNet18 (He et al., 2016), from scratch for feature extraction. Then, from its output, we compute the class-wise Wasserstein distance and the calibrated gradients for data valuation. Details about datasets, models, hyperparameter settings, and ablation studies of the hyperparameters and validation sizes are provided in Appendix B.
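A minimal sketch of this feature-extraction step is given below; `embedder` (the PreActResNet18 trained on the validation data, truncated to its penultimate layer) and the dataloader are assumed to exist, and the helper is illustrative rather than the authors' code.

```python
# Sketch: extract per-sample embeddings that feed the class-wise OT computation.
import torch

@torch.no_grad()
def extract_features(embedder, loader, device="cuda"):
    embedder.eval()
    feats, labels = [], []
    for x, y in loader:
        z = embedder(x.to(device))          # assumed to return penultimate-layer features
        feats.append(z.flatten(1).cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()
```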
We evaluate on five different use cases of data valuation: detecting backdoor attack, poisoning attack, noisy features, mislabeled data, and irrelevant data. The first four are conventional tasks in the literature and the last one is a new case. All of them have a common goal of identifying “low-quality” training points. To achieve this goal, we rank datapoints in ascending order of their values and remove some number of points with lowest data values. For each removal budget, we calculate the detection rate, i.e., the percentage of the points that are truly bad within the removed points.
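For concreteness, the snippet below sketches one natural way to compute such a removal curve: it tracks the fraction of all corrupted points captured within each removal budget (consistent with statements such as "the first 20% of the points that it removes contain at least 80% of the poisoned data"); `values` and the corruption mask `is_bad` are assumed inputs.

```python
# Sketch of the evaluation protocol: rank points by value (ascending), remove the
# lowest-valued points first, and track how many truly bad points have been captured.
import numpy as np

def detection_curve(values, is_bad, budgets):
    order = np.argsort(values)                   # lowest data values removed first
    curve = []
    for b in budgets:
        removed = order[:b]
        curve.append(is_bad[removed].sum() / is_bad.sum())  # fraction of bad points captured
    return curve
```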
Backdoor Attack Detection. A popular technique of introducing backdoors to models is by injecting maliciously constructed data into a training set (Zeng et al., 2021). At test time, any trained model would misclassify inputs patched with a backdoor trigger as the adversarially-desired target class. In the main text, we consider the Trojan Square attack, a popular attack algorithm (Liu et al., 2017), which injects training points that contain a backdoor trigger and are relabeled as a target class. The evaluation of other types of backdoor attacks can be found in Appendix B. To simulate this attack, we select the target attack class Airplane and poison 2500 (5%) samples of the total CIFAR-10 training set (50k) with a square trigger. In Figure 3 I.(a), we compare the detection rates of different data valuation methods. LAVA and TracIn-Clean outperform the others by a large margin. In particular, for LAVA, the first 20% of the points that it removes contain at least 80% of the poisoned data. We also evaluate whether the model trained after the removal still suffers from the backdoor vulnerability. To perform this evaluation, we calculate the attack accuracy, i.e., the accuracy of the model trained on the remaining points to predict backdoored examples as the target label. A successful data removal would yield a lower attack accuracy. Figure 3 I.(b) shows that our method already takes effect in the early stages, whereas other baselines can start defending from the attack only after removing over 13, 000 samples. The efficacy of LAVA is in part attributable to inspection of distances between both features and labels. The backdoored training samples that are poisoned to the target class will be
“unnatural” in that class, i.e., they have a large feature distance from the original samples in the target class. While the poisoned examples contain a small feature perturbation compared to the natural examples from some other classes, their label distance to them is large because their labels are altered.
Poisoning Attack Detection. Poisoning attacks are similar to backdoor attacks in the sense that they both inject adversarial points into the training set to manipulate the prediction of certain test examples. However, poisoning attacks are considered unable to control test examples. We consider a popular attack termed “feature-collision” attack (Shafahi et al., 2018), where we select a target sample from the Cat class test set and blend the selected image with the chosen target class training samples, Frog in our case. In this attack, we do not modify labels and blend the Cat image only into 50 (0.1%) samples of Frog, which makes this attack especially hard to detect. During inference time, we expect the attacked model to consistently classify the chosen Cat as a Frog. In Figure 3 II.(a), we observe that LAVA outperforms all baselines and achieves an 80% detection rate by removing only 11k samples, which is around 60% fewer samples than the highest baseline. Figure 3 II.(b) shows that by removing data according to LAVA ranking, the target model has reduced the confidence of predicting the target Cat sample as a Frog to below 40%. Our technique leverages the fact that the features from a different class are mixed with the features of the poisoned class, which increases the feature distance between the poisoned and non-poisoned Frog examples.
Noisy Feature Detection. While adding small Gaussian noises to training samples may benefit model robustness (Rusak et al., 2020), strong noise, such as due to sensor failure, can significantly affect the model performance. We add strong white noise to 25% of all CIFAR-10 dataset without changing any labels. Our method performs extremely well as shown in Figure 3 III.(a) and detects all 12,500 noisy samples by inspecting less than 15,000 samples. This explains the sudden drop of the model’s accuracy at the removal budget of 15,000 samples in Figure 3 III.(b): the model starts throwing away only clean samples from that point. LAVA performs well in this scenario since the strong noise increases the feature distance significantly.
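A minimal sketch of this corruption, with an illustrative noise scale and array layout, is shown below.

```python
# Sketch of the noisy-feature corruption: strong white noise is added to 25% of the
# training images while the labels are left untouched. Assumes float images in [0, 1].
import numpy as np

def add_strong_noise(images, rate=0.25, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    noisy = images.copy()
    noisy[idx] = np.clip(noisy[idx] + rng.normal(0.0, sigma, noisy[idx].shape), 0.0, 1.0)
    return noisy, idx
```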
Mislabeled Data Detection. Due to the prevalence of human labeling errors (Karimi et al., 2020), it is crucial to detect mislabeled samples. We shuffle the labels of 25% of the samples in the CIFAR-10 dataset to random classes. Unlike backdoor and poisoning attacks, this case is especially hard to detect since the mislabeled samples are spread across classes instead of being placed inside a single target class. However, as shown in Figure 3 IV.(a), LAVA's detection rate outperforms the other baselines, and the model performance is maintained even after 20k points are removed (Figure 3 IV.(b)). A sketch of this label corruption is given below.
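```python
# Sketch of the mislabeling corruption: 25% of the labels are replaced with random
# classes (which may occasionally coincide with the true class). Illustrative only.
import numpy as np

def corrupt_labels(labels, rate=0.25, num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    labels[idx] = rng.integers(0, num_classes, size=len(idx))
    return labels, idx
```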
Irrelevant Data Detection. Often the collected datasets through web scraping have irrelevant samples in given classes (Northcutt et al., 2021; Tsipras et al., 2020), e.g., in a class of Glasses, we might have both water glass and eyeglasses due to lack of proper inspection or class meaning specification. This case is different from the mislabeled data scenario, in which case the training features are all relevant to the task. Since the irrelevant examples are highly likely to have completely different features than the desired class representation, LAVA is expected to detect these examples. We design an experiment where we remove all images of one specific class from the classification output but split them equally to the other remaining classes as irrelevant images. As shown in Figure 4, the detection result over a class varies based on the distance between that class and the class from which irrelevant images are drawn. For instance, when Deer images are placed into the Truck class, we can detect almost 94% of all Deer images within first 500 removed images. On the other hand, when we place Cat images into dog class, our detection rate drops to 45% within the top 500.
Computational Efficiency. So far, we have focused on the method’s performance without considering the actual runtime. We compare the runtime-performance tradeoff on the CIFAR-10 example of 2000 samples with 10% backdoor data, a scale in which every baseline can be executed in a reasonable time. As shown in Figure 5, our method achieves a significant improvement in efficiency while being able to detect bad data more effectively.
Dependence on Validation Data Size. For the current experiments, we have assumed a validation set of size 10K. Such a scale of data is not hard to acquire, as one can get high-quality data from crowdsourcing platforms, such as Amazon Mechanical Turk, for $12 per 1K samples (AWS, 2019). While our method achieves remarkable performance when using 10K validation data, we perform an ablation study on much smaller sets (Appendix B.2.1), where LAVA, notably, can still outperform other baselines. As an example, on mislabeled data detection, our method with 2K validation data achieves an 80% detection rate at a data removal budget of 25K (Fig. 9), whereas the best-performing baseline achieves such performance only with five times more validation data, 10K (Fig. 3 IV.(a)). Furthermore, even on a tiny validation set of size 500, LAVA consistently outperforms all the baselines with the same validation size (Fig. 11). This shows that our method remains effective across various validation data sizes.
5 RELATED WORK
Existing data valuation methods include LOO and influence function (Koh & Liang, 2017), the Shapley value (Jia et al., 2019b; Ghorbani & Zou, 2019; Wang & Jia, 2023), the Banzhaf value (Wang & Jia, 2022), Least Cores (Yan & Procaccia, 2021), Beta Shapley (Kwon & Zou, 2021), and reinforcement learning-based method (Yoon et al., 2020). However, they all assume the knowledge of the underlying learning algorithms and suffer large computational complexity. The work of Jia et al. (2019a) has proposed to use K-Nearest Neighbor Classifier as a default proxy model to perform data valuation. While it can be thought of as a learning-
agnostic data valuation method, it is not as effective and efficient as our method in distinguishing data quality. Xu et al. (2021) propose to use the volume to measure the utility of a dataset. Volume is agnostic to learning algorithms and easy to calculate because it is defined simply as the square root of the trace of the feature matrix inner product. However, the sole dependence on features makes it incapable of detecting bad data caused by labeling errors. Moreover, to evaluate the contribution of individual points, the authors propose to resort to the Shapley value, which would still be expensive for large datasets.
6 DISCUSSION AND OUTLOOK
This paper describes a learning-agnostic data valuation framework. In particular, in contrast to existing methods which typically adopt model validation performance as the utility function, we approximate the utility of a dataset based on its class-wise Wasserstein distance to a given validation set and provide theoretical justification for this approximation. Furthermore, we propose to use the calibrated gradients of the OT distance to value individual datapoints, which can be obtained for free if one uses an off-the-shelf solver to calculate the Wasserstein distance. Importantly, we have tested on various datasets, and our LAVA framework can significantly improve the state-of-the-art performance of using data valuation methods to detect bad data while being substantially more efficient. Due to the stochasticity of ML and the inherent tolerance to noise, it is often challenging to identify low-quality data by inspecting their influence on model performance scores. The take-away from our empirical study is that despite being extensively adopted in the past, low-quality data detection through model performance changes is actually suboptimal; lifting the dependence of data valuation on the actual learning process provides a better pathway to distinguish data quality.
Despite the performance and efficiency improvement, our work still has some limitations. As a result, it opens up many new investigation venues: (1) How to further lift the dependence on validation data? While a validation set representative of the downstream learning task is a common assumption in the ML literature, it may or may not be available during data exchange. (2) Our design could be vulnerable to existing poisons that directly or indirectly minimize the similarity to clean data (Huang et al., 2021; Pan et al., 2022). Further investigation into robust data valuation would be intriguing. (3) Our current method does not have enough flexibility for tasks that aim for goals beyond accuracy, e.g., fairness. Folding other learning goals in is an exciting direction. (4) Customizing the framework to natural language data is also of practical interest.
7 ACKNOWLEDGEMENTS
RJ and the ReDS Lab gratefully acknowledge the support from the Cisco Research Award, the Virginia Tech COE Fellowship, and the NSF CAREER Award. Jiachen T. Wang is supported by Princeton’s Gordon Y. S. Wu Fellowship. YZ is supported by the Amazon Fellowship.
APPENDIX A RESTATEMENT OF THEOREMS AND FULL PROOFS
In this section, we will restate our main results and give full proofs.
A.1 SUMMARY OF NOTATIONS
Let $\mu_t, \mu_v$ be the training distribution and validation distribution, respectively. We denote $f_t: \mathcal{X} \to \{0,1\}^V$, $f_v: \mathcal{X} \to \{0,1\}^V$ as the labeling functions for training and validation data, where $V$ is the number of different labels. We can then denote the joint distributions of random data-label pairs $(x, f_t(x))_{x \sim \mu_t(x)}$ and $(x, f_v(x))_{x \sim \mu_v(x)}$ as $\mu_t^{f_t}$ and $\mu_v^{f_v}$, respectively, which are the same notations as $\mu_t$ and $\mu_v$ but made with explicit dependence on $f_t$ and $f_v$ for clarity. The distributions of $(f_t(x))_{x \sim \mu_t(x)}$ and $(f_v(x))_{x \sim \mu_v(x)}$ are denoted as $\mu_{f_t}$, $\mu_{f_v}$, respectively. Besides, we define the conditional distributions $\mu_t(x|y) := \frac{\mu_t(x)\,\mathbb{I}[f_t(x)=y]}{\int \mu_t(x)\,\mathbb{I}[f_t(x)=y]\,dx}$ and $\mu_v(x|y) := \frac{\mu_v(x)\,\mathbb{I}[f_v(x)=y]}{\int \mu_v(x)\,\mathbb{I}[f_v(x)=y]\,dx}$. Let $f: \mathcal{X} \to [0,1]^V$ be the model trained on training data and $\mathcal{L}: \{0,1\}^V \times [0,1]^V \to \mathbb{R}^+$ be the loss function. We denote $\pi \in \Pi(\mu_1, \mu_2)$ as a coupling between a pair of distributions $\mu_1, \mu_2$ and $d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ as a distance metric function. The 1-Wasserstein distance with respect to the distance function $d$ between two distributions $\mu_1, \mu_2$ is defined as $\mathcal{W}_d(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y) \sim \pi}[d(x, y)]$. More generally, the 1-Wasserstein distance with respect to the cost function $\mathcal{C}$ is defined as $\mathcal{W}_{\mathcal{C}}(\mu_1, \mu_2) := \inf_{\pi \in \Pi(\mu_1, \mu_2)} \mathbb{E}_{(x,y) \sim \pi}[\mathcal{C}(x, y)]$.
A.2 STATEMENT OF ASSUMPTIONS
To prove Theorem 1, we need the concept of probabilistic cross-Lipschitzness, which assumes that two labeling functions should produce consistent labels with high probability on two close instances. Definition 3 (Probabilistic Cross-Lipschitzness). Two labeling functions ft : X → {0, 1}V and fv : X → {0, 1}V are (ϵ, δ)-probabilistic cross-Lipschitz w.r.t. a joint distribution π over X × X if for all ϵ > 0:
P(x1,x2)∼π[∥ft(x1)− fv(x2)∥ > ϵd(x1, x2)] ≤ δ. (4)
Intuitively, given labeling functions ft, fv and a coupling π, we can bound the probability of finding pairs of training and validation instances labelled differently in a (1/ϵ)-ball with respect to π.
Our Assumptions. Assume that $f$ is an $\epsilon$-Lipschitz function. Given a metric function $d(\cdot, \cdot)$, we define a cost function $\mathcal{C}$ between $(x_t, y_t)$ and $(x_v, y_v)$ as
$$\mathcal{C}((x_t, y_t), (x_v, y_v)) := d(x_t, x_v) + c\,\mathcal{W}_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v)), \tag{5}$$
where $c$ is a constant. Let $\pi^*_{x,y}$ be the coupling between $\mu_t^{f_t}, \mu_v^{f_v}$ such that
$$\pi^*_{x,y} := \arg\inf_{\pi \in \Pi(\mu_t^{f_t}, \mu_v^{f_v})} \mathbb{E}_{((x_t,y_t),(x_v,y_v)) \sim \pi}\big[\mathcal{C}((x_t, y_t), (x_v, y_v))\big]. \tag{6}$$
We define two couplings $\pi^*$ and $\tilde{\pi}^*$ between $\mu_t(x), \mu_v(x)$ as follows:
$$\pi^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dy_t\, dy_v. \tag{7}$$
For $\tilde{\pi}^*$, we first need to define a coupling between $\mu_{f_t}, \mu_{f_v}$:
$$\pi^*_y(y_t, y_v) := \int_{\mathcal{X}} \int_{\mathcal{X}} \pi^*_{x,y}((x_t, y_t), (x_v, y_v))\, dx_t\, dx_v, \tag{8}$$
and another coupling between $\mu_t^{f_t}, \mu_v^{f_v}$:
$$\tilde{\pi}^*_{x,y}((x_t, y_t), (x_v, y_v)) := \pi^*_y(y_t, y_v)\,\mu_t(x_t|y_t)\,\mu_v(x_v|y_v). \tag{9}$$
Finally, $\tilde{\pi}^*$ is constructed as follows:
$$\tilde{\pi}^*(x_t, x_v) := \int_{\mathcal{Y}} \int_{\mathcal{Y}} \pi^*_y(y_t, y_v)\,\mu_t(x_t|y_t)\,\mu_v(x_v|y_v)\, dy_t\, dy_v. \tag{10}$$
It is easy to see that all joint distributions defined above are couplings between the corresponding distribution pairs.
We assume that ft, fv are (ϵtv, δtv)-probabilistic cross-Lipschitz with respect to π̃∗ in metric d. Additionally, we assume that ϵtv/ϵ ≤ c and the loss function L is k-Lipschitz in both inputs. Besides, from their definitions above, we have that ∥f(x)∥, ∥ft(x)∥, ∥fv(x)∥ ≤ V . The assumption of probabilistic cross-Lipschitzness would be violated only when the underlying coupling assigns large probability to pairs of training-validation features that are close enough (within 1/ϵtv-ball) but labeled differently. However, π̃∗ is generally not such a coupling. Note that π∗ is the optimal coupling between training and validation distributions that minimizes a cost function C pertaining to both feature and label space. Hence, π∗y(yt, yv), the marginal distribution of π
∗ over the training and validation label space, tends to assign high probability to those label pairs that agree. On the other hand, π̃∗x,y can be thought of as a coupling that first generates training-validation labels from π∗y and then generates the features in each dataset conditioning on the corresponding labels. Hence, the marginal distribution π̃∗ of training-validation feature pairs generated by π̃∗x,y would assign high likelihood to those features with the same labels. So, conceptually, the probabilistic cross-Lipschitzness assumption should be easily satisfied by π̃∗.
A.3 DETAILED PROOF
Theorem 1 (restated). Given the above assumptions, we have
$$\mathbb{E}_{x \sim \mu_v(x)}[\mathcal{L}(f_v(x), f(x))] \leq \mathbb{E}_{x \sim \mu_t(x)}[\mathcal{L}(f_t(x), f(x))] + k\epsilon\,\mathcal{W}_{\mathcal{C}}(\mu_t^{f_t}, \mu_v^{f_v}) + 2kV\delta_{tv}. \tag{11}$$
Proof.
Ex∼µv(x)[L(fv(x), f(x))] (12) = Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))] + Ex∼µt(x)[L(ft(x), f(x))] (13) ≤ Ex∼µt(x)[L(ft(x), f(x))] +
∣∣Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))]∣∣ . (14) We bound
∣∣Ex∼µv(x) [L(fv(x), f(x))]− Ex∼µt(x) [L(ft(x), f(x))]∣∣ as follows: ∣∣Ex∼µv(x) [L(fv(x), f(x))]− Ex∼µt(x) [L(ft(x), f(x))]∣∣ (15) =
∣∣∣∣∫ X2 [L(fv(xv), f(xv))− L(ft(xt), f(xt))] dπ∗(xt, xv) ∣∣∣∣ (16)
= ∣∣∣∣∫ X2 [L(fv(xv), f(xv))− L(fv(xv), f(xt)) + L(fv(xv), f(xt))− L(ft(xt), f(xt))] dπ∗(xt, xv) ∣∣∣∣ (17)
≤ ∫ X2
|L(fv(xv), f(xv))− L(fv(xv), f(xt))| dπ∗(xt, xv)︸ ︷︷ ︸ U1
(18)
+ ∫ X2
|L(fv(xv), f(xt))− L(ft(xt), f(xt))| dπ∗(xt, xv)︸ ︷︷ ︸ U2 , (19)
where the last inequality is due to triangle inequality.
Now, we bound U1 and U2 separately. For U1, we have U1 ≤ k ∫ X 2 ∥f(xv)− f(xt)∥ dπ∗(xt, xv) (20)
≤ kϵ ∫ X 2 d(xt, xv) dπ ∗(xt, xv), (21)
where both inequalities are due to Lipschitzness of L and f . In order to bound U2, we first recall that π∗y(yt, yv) = ∫ X ∫ X π ∗ x,y((xt, yt), (xv, yv)) dxtdxv and π̃∗x,y((xt, yt), (xv, yv)) := π ∗ y(yt, yv)µt(xt|yt)µv(xv|yv):
Observe that
U2 = ∫ X 2 ∫ Y2 |L(fv(xv), f(xt))− L(ft(xt), f(xt))| dπ∗x,y((xt, yt), (xv, yv)) (22)
= ∫ Y2 ∫ X 2 |L(yv, f(xt))− L(yt, f(xt))| dπ∗x,y((xt, yt), (xv, yv)) (23)
≤ k ∫ Y2 ∫ X 2 ∥yv − yt∥ dπ∗x,y((xt, yt), (xv, yv)) (24)
= k ∫ Y2 ∥yv − yt∥ dπ∗y(yt, yv), (25)
where the second equality is due to a condition that if yt ̸= ft(xt) or yv ̸= fv(xv), then π∗x,y((xt, yt), (xv, yv)) = 0.
Now we can bound U2 as follows: U2 ≤ k ∫ Y2 ∥yv − yt∥ dπ∗y(yt, yv) (26)
= k ∫ X 2 ∫ Y2 ∥yv − yt∥ dπ̃∗x,y((xt, yt), (xv, yv)) (27)
= k ∫ Y2 ∫ X 2 ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)), (28)
where the last step holds since if yt ̸= ft(xt) or yv ̸= fv(xv) then π̃∗x,y((xt, yt), (xv, yv)) = 0.
Define the region A = {(xt, xv) : ∥fv(xv)− ft(xt)∥ < ϵtvd(xt, xv)}, then
k ∫ Y2 ∫ X 2 ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (29)
= k ∫ Y2 ∫ X 2\A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (30)
+ k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (31)
≤ k ∫ Y2 ∫ X 2\A 2V dπ̃∗x,y((xt, yt), (xv, yv)) (32)
+ k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)). (33)
Let’s define f̃t(xt) = ft(xt) and f̃v(xv) = fv(xv) if (xt, xv) ∈ A, and f̃t(xt) = f̃v(xv) = 0 otherwise (note that ∥f̃v(xv) − f̃t(xt)∥ ≤ ϵtvd(xt, xv) for all (xt, xv) ∈ X 2), then we can bound the second term as follows:
k ∫ Y2 ∫ A ∥fv(xv)− ft(xt)∥ dπ̃∗x,y((xt, yt), (xv, yv)) (34)
≤ k ∫ Y2 dπ∗y(yt, yv) ∫ A ∥fv(xv)− ft(xt)∥ dµt(xt|yt)dµv(xv|yv) (35)
= k ∫ Y2 dπ∗y(yt, yv) ∫ X 2 ∥∥∥f̃v(xv)− f̃t(xt)∥∥∥ dµt(xt|yt)dµv(xv|yv) (36) = k
∫ Y2 dπ∗y(yt, yv) ∥∥∥Exv∼µv(·|yv)[f̃v(xv)]− Ext∼µv(·|yt)[f̃t(xt)]∥∥∥ (37)
≤ kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)). (38)
Inequality (38) is a consequence of the duality form of the Kantorovich-Rubinstein theorem (Villani (2021), Chapter 1).
Combining two parts, we have U2 ≤ k ∫ Y2 ∫ X 2\A 2V dπ̃∗x,y((xt, yt), (xv, yv)) (39)
+ kδtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)) (40)
≤ 2kV δtv + kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)), (41)
where the last step is due to the probabilistic cross-Lipschitzness of ft, fv with respect to π̃∗x,y .
Now, combining the bound for U1 and U2, we have
Ex∼µv(x)[L(fv(x), f(x))]− Ex∼µt(x)[L(ft(x), f(x))] (42) ≤ kϵ ∫ X 2 d(xt, xv)dπ(xt, xv) + 2kV δtv + kϵtv ∫ Y2 dπ∗y(yt, yv)Wd(µt(·|yt), µv(·|yv)) (43)
= k ∫ (X×Y)2 [ϵd(xt, xv) + ϵtvWd(µt(·|yt), µv(·|yv))] dπ∗x,y((xt, yt), (xv, yv)) + 2kV δtv (44)
≤ k ∫ (X×Y)2 [ϵd(xt, xv) + cϵWd(µt(·|yt), µv(·|yv))] dπ∗x,y((xt, yt), (xv, yv)) + 2kV δtv (45)
= kϵEπ∗x,y [C((xt, yt), (xv, yv))] + 2kV δtv (46)
= kϵWC(µ ft t , µ fv v ) + 2kV δtv, (47)
where the last step is due to the definition of π∗x,y . This leads to the final conclusion.
Theorem 5 (restated). Let $\mathrm{OT}(\mu_t, \mu_v)$ and $\mathrm{OT}_\varepsilon(\mu_t, \mu_v)$ be the original formulation and the entropy-penalized formulation (as defined in Subsection 2.1) for the OT problem between the empirical measures $\mu_t$ and $\mu_v$ associated with the two datasets $D_t$ and $D_v$, respectively. Then, for any $i \neq j \neq k \in \{1, 2, \ldots, N\}$ and $o \neq p \neq q \in \{1, 2, \ldots, M\}$, the difference between the calibrated gradients for two datapoints $z_i$ and $z_k$ in dataset $D_t$ and the difference for $z'_p$ and $z'_q$ in $D_v$ can be calculated as
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{kj}} - \frac{1}{(\pi^*_\varepsilon)_{ij}} \right),$$
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} = \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_p)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_v(z'_q)} - \varepsilon \cdot \frac{M}{M-1} \cdot \left( \frac{1}{(\pi^*_\varepsilon)_{oq}} - \frac{1}{(\pi^*_\varepsilon)_{op}} \right),$$
where $\pi^*_\varepsilon$ is the optimal primal solution to the entropy-penalized OT problem, $z_j$ is any datapoint in $D_t$ other than $z_i$ or $z_k$, $z'_o$ is any datapoint in $D_v$ other than $z'_p$ or $z'_q$, $|D_t| = N$, and $|D_v| = M$.
Proof. Let $L(\pi, f, g)$ and $L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon)$ be the Lagrangian functions for the original formulation and the entropy-penalized formulation between the datasets $D_t$ and $D_v$, respectively, which can be written as
$$L(\pi, f, g) = \langle \pi, c\rangle + \sum_{i=1}^N f_i \cdot \big(\pi'_i \cdot I_M - \mu_t(z_i)\big) + \sum_{j=1}^M g_j \cdot \big(I'_N \cdot \pi_j - \mu_v(z_j)\big),$$
$$L_\varepsilon(\pi_\varepsilon, f_\varepsilon, g_\varepsilon) = \langle \pi_\varepsilon, c\rangle + \varepsilon \cdot \sum_{i=1}^N \sum_{j=1}^M \log \frac{(\pi_\varepsilon)_{ij}}{\mu_t(z_i) \cdot \mu_v(z_j)} + \sum_{i=1}^N (f_\varepsilon)_i \cdot \big[(\pi_\varepsilon)'_i \cdot I_M - \mu_t(z_i)\big] + \sum_{j=1}^M (g_\varepsilon)_j \cdot \big[I'_N \cdot (\pi_\varepsilon)_j - \mu_v(z_j)\big],$$
where $c \in \mathbb{R}^{N \times M}$ is the cost matrix consisting of distances between the $N$ datapoints in $D_t$ and the $M$ datapoints in $D_v$, $I_M = (1, 1, \ldots, 1)^T \in \mathbb{R}^{M \times 1}$ and $I'_N = (1, 1, \ldots, 1) \in \mathbb{R}^{1 \times N}$, $\pi$ and $(f, g)$ denote the primal and dual variables, and $\pi'_i$ and $\pi_j$ denote the $i$-th row and the $j$-th column of the matrix $\pi$, respectively.
The first-order necessary condition for optimality in the Lagrangian Multiplier Theorem gives that $\nabla_\pi L(\pi^*, f^*, g^*) = 0$ and $\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon) = 0$, where $\pi^*$ and $(f^*, g^*)$ denote the optimal solutions to the primal and dual problems, respectively. Thus, for any $i \in \{1, 2, \ldots, N\}$ and $j \in \{1, 2, \ldots, M\}$, we have
$$\nabla_\pi L(\pi^*, f^*, g^*)_{ij} = c_{ij} + f^*_i + g^*_j = 0,$$
$$\nabla_\pi L_\varepsilon(\pi^*_\varepsilon, f^*_\varepsilon, g^*_\varepsilon)_{ij} = c_{ij} + \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} + (f_\varepsilon)^*_i + (g_\varepsilon)^*_j = 0.$$
Subtracting, we have
$$\big[f^*_i - (f_\varepsilon)^*_i\big] + \big[g^*_j - (g_\varepsilon)^*_j\big] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{ij}} = 0.$$
Then, for any $k \neq i \in \{1, 2, \ldots, N\}$, we have
$$\big[f^*_k - (f_\varepsilon)^*_k\big] + \big[g^*_j - (g_\varepsilon)^*_j\big] - \varepsilon \cdot \frac{1}{(\pi^*_\varepsilon)_{kj}} = 0.$$
Subtracting and reorganizing, we get
$$\big[(f_\varepsilon)^*_i - (f_\varepsilon)^*_k\big] = (f^*_i - f^*_k) - \varepsilon \cdot \left[ \frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}} \right].$$
From the definition of the calibrated gradients in Eq. 1, we have
$$\frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,(f^*_i - f^*_k),$$
$$\frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{N}{N-1}\,\big[(f_\varepsilon)^*_i - (f_\varepsilon)^*_k\big].$$
Finally, subtracting and reorganizing, we have
$$\frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}_\varepsilon(\mu_t, \mu_v)}{\partial \mu_t(z_k)} = \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_i)} - \frac{\partial\,\mathrm{OT}(\mu_t, \mu_v)}{\partial \mu_t(z_k)} - \varepsilon \cdot \frac{N}{N-1} \cdot \left[ \frac{1}{(\pi^*_\varepsilon)_{ij}} - \frac{1}{(\pi^*_\varepsilon)_{kj}} \right].$$
The proof for the second part of the Theorem is similar.
∂OTε(µt, µv) ∂µv(z′p) − ∂OTε(µt, µv) ∂µv(z′q) = ∂OT(µt, µv) ∂µv(z′p) − ∂OT(µt, µv) ∂µv(z′q) − ε · M M − 1 · [ 1 (π∗ε )op − 1 (π∗ε )oq ] .
Then the proof is complete.
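For intuition, the relationship above can be checked numerically with a minimal NumPy sketch (our illustration, not the authors' released implementation; the toy data, the function name, and the use of a log-domain Sinkhorn solver are assumptions). Sorting points by the entropic dual potential $f$ reproduces the ranking induced by the calibrated gradients of the entropic problem, up to the $N/(N-1)$ factor, while Theorem 5 quantifies the $\varepsilon$-dependent gap to the unregularized gradients.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_duals(cost, a, b, eps=0.05, n_iter=1000):
    # Log-domain Sinkhorn iterations for entropy-regularized OT between two
    # discrete measures a (size N) and b (size M); returns the dual
    # potentials (f, g) and the primal transport plan pi.
    f, g = np.zeros_like(a), np.zeros_like(b)
    for _ in range(n_iter):
        f = -eps * logsumexp((g[None, :] - cost) / eps, b=b[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - cost) / eps, b=a[:, None], axis=0)
    pi = a[:, None] * b[None, :] * np.exp((f[:, None] + g[None, :] - cost) / eps)
    return f, g, pi

rng = np.random.default_rng(0)
Xt, Xv = rng.normal(size=(8, 2)), rng.normal(size=(5, 2))   # toy "training"/"validation" features
C = ((Xt[:, None, :] - Xv[None, :, :]) ** 2).sum(-1)        # squared-Euclidean ground cost
a, b = np.full(8, 1 / 8), np.full(5, 1 / 5)                 # uniform empirical measures
f, g, pi = sinkhorn_duals(C, a, b)
# Differences f[i] - f[k] equal the calibrated-gradient differences of the
# entropic problem up to the factor N/(N-1), so sorting by f ranks the points.
ranking = np.argsort(-f)
```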
APPENDIX B ADDITIONAL EXPERIMENTAL RESULTS
B.1 EVALUATING DATA VALUATION USE CASES ON DIVERSE DATASETS
In the main text, we have focused our evaluation on CIFAR-10. Here, we provide experiments that show the effectiveness of LAVA at detecting bad data on a more diverse set of datasets.
Backdoor Attack Detection. We evaluate another type of backdoor attack (Section 4), which is the Hello Kitty blending attack (Blend) (Chen et al., 2017) that mixes the target class sample with the Hello Kitty image, as illustrated in Figure 8 (B). We attack the German Traffic Sign dataset (GTSRB) on the target class 6 by poisoning 1764 (5%) samples of the whole dataset. Our method achieves the highest detection rate, as shown in Figure 6(a). In particular, the 5000 points with lowest data values contain all poisoned data based on the LAVA data values, while the second best method on this task, KNN-SV, can cover all poisoned examples with around 11,000 samples. Our algorithm performs especially well for this attack, since the label of poisoned data is changed to the target class and the patching trigger is large. Both the label and feature changes contribute to the increase of the OT distance and thus ease the detection.
Noisy Feature Detection. Here, we show the usage of LAVA on the MNIST dataset, where 25% of the whole dataset is contaminated by feature noise. Our method still outperforms all the baselines, detecting all noisy data within the first 14,000 samples, which is 5,000 fewer than the best baseline requires, as shown in Figure 6(b).
Figure 7: Visualization of irrelevant data detection within the CIFAR100 dataset. The left column is one example of the target class and the images on the right columns are selected irrelevant data in the corresponding classes detected by LAVA.
Irrelevant Data. We perform another irrelevant data detection experiment and focus on the CIFAR100 dataset. In Figure 7, we illustrate some of the irrelevant samples detected by LAVA. Intuitively, irrelevant data in the class should be easily detected by LAVA, since the images are far from the representative of the class and increasing the probability mass associated with these images leads to larger distributional distance to the clean validation data.
B.2 ABLATION STUDY
We perform an ablation study on validation size and on the hyperparameters in our method, where we provide insights on the impact of setting changes. We use the mislabeled detection use case and the CIFAR-10 dataset as an example setting for the ablation study.
B.2.1 VALIDATION SIZE
Figure 8: Visualization of each backdoor attack: A) Trojan-SQ attack. B) Blend attack. C) Trojan-WM attack.
For all the experiments in the main text, we use a validation set of size 10,000. Naturally, we want to examine the effect of the size of the validation set on the detection rate of mislabeled data. In Figure 9 (c), we illustrate the detection-rate performance with smaller validation sizes: 200, 500, 2,000, and 5,000. We observe that even reducing the validation set by half, to 5,000, largely maintains the detection-rate performance. Small validation sets (200, 500, 2,000) degrade the detection rate by more than 50%. Despite the performance degradation, our detection performance with these small validation sizes is in fact comparable with the baselines in Figure 3 IV.(a) that leverage the full validation size of 10,000. Additionally, when restricting LAVA and the other baselines to a validation set of 500 samples, our method is better than the best baseline at detecting mislabeled data in the 50k CIFAR-10 samples with 25% mislabeled, as shown in Figure 11.
B.2.2 FEATURE WEIGHT
Recall that the class-wise Wasserstein distance is defined with respect to the following distance metric: $\mathcal{C}((x_t, y_t), (x_v, y_v)) = d(x_t, x_v) + c\, W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$. One can change the relative weight between the feature distance $d(x_t, x_v)$ and the label distance $W_d(\mu_t(\cdot|y_t), \mu_v(\cdot|y_v))$. Here, we show the effect of upweighting the feature distance while keeping the label weight at 1; the results are illustrated in Figure 9 (a). As we move away from uniform weights, the detection rate decreases with larger feature weights. With a feature weight of 100, our method performs similarly to the random detector. Indeed, as we increase the weight on the features, the relative weight on the label distance decreases. As the weight reaches 100, our method behaves like the feature embedder without label information, and hence the mislabeled-detection performance is comparable to the random baseline. A code sketch of this weighted cost is given below.
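As a concrete illustration of this weighting (a hedged sketch only; the function and variable names are ours, not the released implementation), the ground cost used in the ablation can be assembled from a feature-distance term and a precomputed label-to-label Wasserstein matrix:

```python
import numpy as np

def pairwise_cost(feat_t, feat_v, labels_t, labels_v, W_label, w_feat=1.0, w_label=1.0):
    # C((x_t, y_t), (x_v, y_v)) = w_feat * d(x_t, x_v) + w_label * W_d(mu_t(.|y_t), mu_v(.|y_v)).
    # W_label[yt, yv] holds the precomputed label-to-label Wasserstein distances.
    d_feat = np.linalg.norm(feat_t[:, None, :] - feat_v[None, :, :], axis=-1)
    d_label = W_label[np.ix_(labels_t, labels_v)]
    return w_feat * d_feat + w_label * d_label
```

Setting w_feat much larger than w_label (or vice versa) reproduces the two ablations in this subsection and the next: the label information, respectively the feature information, is progressively ignored.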
B.2.3 LABEL WEIGHT
Next, we shift focus to the label weight. We examine the effect of upweighting the label distance while keeping the feature weight at 1. In Figure 9 (b), as the label weight increases, the detection-rate performance deteriorates. When we increase the label weight, the feature information is increasingly neglected, which is not as effective as balanced weights between the feature and label distances.
B.2.4 FEATURE EMBEDDER
We use a feature embedder to extract features for the feature-distance part of our method. We train the feature embedder on the accessible validation set until the training accuracy converges. Different embedder architectures might be sensitive to different aspects of the input and thus produce different features. Nevertheless, as we observe in Figure 10, the detection performance associated with different feature-embedder architectures is similar. Hence, in practice, one can flexibly choose the feature embedder to be used in tandem with our method, as long as it has enough capacity. Furthermore, we note that these feature embedders have not learned the clean distribution from the validation data; e.g., on CIFAR-10 the model trained on 10K validation data achieves only around 65% accuracy on the 50K clean datapoints, and the model trained on 500 validation data achieves around 25% accuracy. We additionally show in Figures 14 and 15 that our method significantly outperforms the PreActResNet18 model trained directly on validation data of size 500 and 10K in detecting bad data, which clearly distinguishes LAVA from simple feature embedders.
B.3 BALANCING UNBALANCED DATASET
Although machine learning practitioners might train on clean data, the dataset is often unbalanced, which can lead to model performance degradation (Thai-Nghe et al., 2009). To recover higher model accuracy, we can rebalance an unbalanced dataset by removing the points that cause the disproportion. We showcase how LAVA effectively rebalances the dataset by removing points with poor values and keeping points with the best values. We consider a CIFAR-10 dataset in which the class Frog is unbalanced, containing 5,000 samples while the other classes have only half as many (i.e., 2,500 samples). In Figure 12, we demonstrate the effectiveness of LAVA's valuation, which not only shrinks the dataset by removing poor-value points but also improves model accuracy. In contrast, the other valuation methods were not able to steadily increase model accuracy and quickly degraded model performance, which further underscores the effectiveness of our method.
B.4 REDUCING TRAINING SET SIZE
With the growing size of training datasets, the computation cost and memory overhead naturally increase, which might make it impossible for practitioners with limited resources to train a model. Therefore, the ability to reduce the training dataset size (Sener & Savarese, 2018) frees up some of the computational burden and allows those with limited resources to carry out model training. Motivated by this challenge, we want to leverage our data valuation method to significantly decrease the training dataset size while maintaining model performance. As in the previous section, the idea is to keep a subset of datapoints with the best values and remove the poorly valued ones. To demonstrate the effectiveness of LAVA's valuation, we perform this task on a clean CIFAR-10 dataset with 2,500 samples from each class and compare with other data valuation methods. As presented in Figure 13, performance is well maintained even with smaller subsets of the original dataset. Remarkably, even after reducing a clean training set (25,000 samples) by 15% based on our method's valuation, performance stays relatively high while outperforming the other valuation baselines.
B.5 DATA SUMMARIZATION
As dataset sizes grow, so does the space needed to store the data. Thus, a buyer often wants to shrink the dataset to save resources while retaining performance. Unlike the training-set reduction in Section B.4, in this experiment we select a smaller, representative subset of the whole dataset that maintains good performance. To measure the performance of each subset, we report the validation performance of the model trained on that subset minus the validation performance of a model trained on a random subset of the same size, following the experiment performed in Kwon & Zou (2021). In Figure 16, we observe that our method selects a small subset that performs better than the subsets chosen by the baseline methods most of the time.
B.6 SCALABILITY EXPERIMENT
In the main paper, we demonstrated the time-complexity comparison between LAVA and other valuation methods. We reported runtime comparisons only for 2,000 test samples, as this is the scale existing methods can handle within a reasonable time (a day); it showcases the computational efficiency that the proposed approach enjoys over other methods. To further emphasize the computational efficiency of LAVA, we demonstrate it on a larger-scale, higher-dimensional dataset with 100,000 samples, ImageNet-100. Additionally, we evaluate the other baselines that are able to finish within a day of computation to highlight the advantage of our method, as presented in Table 1. Moreover, we highlight the near-linear time complexity of LAVA on CIFAR-10, which shows the practical computational efficiency of our method, as shown in Figure 17.
B.7 GENERALIZATION TO OTHER TYPES OF BACKDOOR ATTACKS
As we have provided the results of the Trojan square attack (TrojanSQ) (Liu et al., 2017) in Section 4, we now apply LAVA to other backdoor attacks, namely the Hello Kitty blending attack (Blend) (Chen et al., 2017) and the Trojan watermark attack (Trojan-WM) (Liu et al., 2017), and evaluate the efficacy of our method in detecting different types of backdoor attacks. We simulate these attacks by selecting the target class Airplane and poisoning 2,500 (5%) samples of the CIFAR-10 dataset of size 50,000. The backdoor trigger adopted in each attack is portrayed in Figure 8. In Figure 18, we observe that our method achieves superior detection performance on all the attacks considered. The reason is that, despite the difference in trigger pattern, all of these attacks modify both the label and the feature of a poisoned image and thus cause a deviation in our distributional distance, which is defined over the product space of features and labels.
B.8 IMPLICATIONS OF THE PROPOSED DATA VALUATION METHOD TO REAL-WORLD DATA MARKETPLACES
One concern in a real-world data marketplace is that data is freely replicable. However, replicates of data introduce no new information, and therefore prior work has argued that a data utility function should be robust to direct data copying (Xu et al., 2021). One advantage of using the class-wise Wasserstein distance to measure data utility is that it is robust to duplication: by its distributional formulation, our method ignores duplicated sets. As shown in Table 3, even after repeating the source set five times, the distance remains the same. Additionally, with small noise
changes in the features, the distance metric is barely affected. Another concern in the real-world marketplace is that one might find a single data that has highest contribution and duplicate it to maximize the profit. However, again due to the nature of our distributional formulation, duplicating a single point multiple times would increase the distance between the training and the validation set due to the imbalance in training distribution caused by copying that point.
B.9 DETAILED EXPERIMENTAL SETTINGS
Datasets and Models. Table 2 summarizes the details of the dataset, the models, as well as their licenses adopted in our experiments.
Hardware. A server with an NVIDIA Tesla P100-PCIE-16GB graphic card is used as the hardware platform in this work.
Software.
For our implementation, we use PyTorch for the main framework (Paszke et al., 2019), assisted by three main libraries, which are otdd (optimal transport calculation setup with datasets) (Alvarez-Melis & Fusi, 2020), geomloss (actual optimal transport calculation) (Feydy et al., 2019), and numpy (tool for array routines) (Harris et al., 2020). | 1. What is the focus and contribution of the paper on data valuation using optimal transport?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and soundness?
3. Do you have any concerns about the experiment setups and presentation of the results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a learning-agnostic data valuation framework based on optimal transport (OT). The authors suggest using a Class-wise Wasserstein distance between two datasets as a surrogate for validation performance, thus alleviating the need to pre-specifying the learning algorithm in advance. The authors then propose a new data valuation metric based on the gradient of the OT distance with respect to the perturbation on the probability mass associated with each data point. Even though this data valuation metric is difficult to compute, the paper points out that the difference can be calculated efficiently, allowing for an efficient ranking of the data points. The paper then presents empirical results to demonstrate the efficiency of the proposed methods on several different problems.
Strengths And Weaknesses
Strengths:
(+) The paper is sufficiently novel and the approach is interesting.
Weaknesses:
(-) The proposed approach lacks justification.
(-) The experiment setups are unclear.
(-) The presentation can be improved.
Detailed comments:
(main concern - soundness) Theorem 1 resembles a statistical learning generalization bound, where we have an out-of-sample error (in this case, the validation set’s error) upper bounded by the training set’s error and some surplus term (the OT distance). By arguing that the training set’s error is close to 0 for sufficiently complex models, the authors justify the use of OT distance as a surrogate for the training set’s error by this theorem. However, one can apply the same logic to other generalization bounds (e.g., one based on VC dimension) to justify the use of VC dimension as a surrogate for the validation error. I find that this theorem does not satisfactorily justify the use of OT distance as a surrogate for the validation error.
(main concern - soundness) Theorem 1 is vacuous and follows almost immediately from the assumptions that L is k-Lipschitz and the norm of f is bounded by V. Could the authors elaborate on the challenges/difficulty in the proof of Theorem 1.
The authors did not explain the experiment setups (not even in the appendix). What is the validation dataset here? And in the mislabeled or noisy feature experiments, are the validation data corrupted too? It is unclear how the proposed framework is applied in these experiments.
(presentation) In the paragraph “Radius for accurate predictions…”, can the authors be more specific about the results in Bertsimas & Tsitsiklis (1997). I find this paragraph a bit hand-wavy.
(presentation) I do not follow the argument in this sentence (at the beginning of Section 3.1): “OT distance is known to be insensitive to small differences while also being not robust to large deviations”. I think the phrase “not robust” is not appropriate here. Also, I’m not sure why that is a desirable property in data valuation.
(presentation) Please use more equation numbering: Throughout the paper, the authors refer to various quantities/definitions via section number. This is very challenging for readers to follow. Could you please put important quantities/definitions in equation environments with numbers and refer to them?
(minor aesthetic comment - presentation) The results in Figures 2 and 3 are blurry. Although I can still understand the results, it would be nicer if the authors can include clearer versions of these plots.
Clarity, Quality, Novelty And Reproducibility
Significance: The problem of data valuation is an important and contemporary problem in machine learning. The paper proposes a method to decouple the data valuation process from the learning algorithm. This is a meaningful contribution and is suitable to be presented at ICLR 2023.
Soundness: The paper proposes new solutions to the data valuation problem. However, I feel that these solutions are not satisfactorily justified. The main theoretical result (Theorem 1) seems to be very vacuous and follows almost immediately from the Lipchitzness assumption. Please refer to the section below for more detailed comments.
Novelty: The paper incorporates concepts from optimal transport to the problem of data valuation, offering an unconventional alternative to the common Shapley value (or more broadly cooperative game theory) approaches to data valuation.
Presentation: The paper is generally well-written and easy to follow. However, the authors occasionally make some arguments that are hard to follow or not well justified by the evidence. Please refer to the section below for more detailed comments. |
ICLR | Title
Exponentially Decaying Flows for Optimization in Deep Learning
Abstract
The field of deep learning has been craving for an optimization method that shows outstanding property for both optimization and generalization. We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function. In our method, the flows refer to Exponentially Decaying Flows (EDF), as they can be designed to converge on the local solutions exponentially. In this paper, we conduct experiments to show its high performance on optimization benchmarks (i.e., convergence properties), as well as its potential for producing good machine learning benchmarks (i.e., generalization properties).
1 INTRODUCTION
Due to recent progress in the field of machine learning, it becomes more and more important to develop and sophisticate methods of solving hard optimization problems. At the same time, in this field, such methods are additionally required to elicit decent generalization performance from statistical models. An efficient method of mathematical optimization, however, does not always produce sufficient generalization properties, since these are involved with two distinct mathematical problems; The former is to find one of the solutions which minimize a given (possibly non-convex) objective function, and the latter is to adjust parameters so that a statistical estimator achieves its best result. To address such a hard issue, we introduce a new mathematical perspective on optimization, and develop a method for machine learning based on this perspective. We then empirically show its rapid convergence rate and high compatibility with deep learning techniques, as well as good statistical properties.
In this field, many optimization methods have been proposed and modified so that they fit specific problems or models. One of the current standard methods is the gradient descent method. The method tends to converge slowly in general optimization problems. However, with various specific techniques, such as mini-batch training and batch normalization (Ioffe & Szegedy (2015)), it has been found to be efficient for state-of-the-art purposes in the field of deep learning. Another class of methods that are now becoming popular and standard is adaptive methods, such as AdaGrad (Duchi et al. (2011)) and Adam (Kingma & Ba (2015)). Compared to the gradient descent method, these methods have been shown to improve convergence rates with almost the same computational cost as the gradient descent method, but are reported to result in poor statistical outcomes in some cases of machine learning (Wilson et al. (2017)).
Other class of methods that have been thoroughly studied in the theory of mathematical optimization is second-order methods, such as the Newton method and the Gauss-Newton method. These methods possess great convergence properties, and in particular, have a potential to overcome plateau’s problems (Dauphin et al. (2014)). Furthermore, when it comes to applications in stochastic settings, the method based on the Gauss-Newton Matrix (or Fisher information Matrix) is shown to asymptotically attain the best statistical result, which is called Fisher efficiency (see Amari (1998)). Despite these attractive characteristics, the methods have not yet been spotlighted in the field of machine learning due to several severe drawbacks; They suffer from high computational cost in general and their useful properties are no longer guaranteed in practical settings (see Section 12 in Martens (2014)). One of the continuously developing second-order methods in this field, K-FAC ( Ba et al. (2017), Grosse & Martens (2016) ), successfully produced high convergence rate empirically with relatively low computational cost. However, it still requires much effort to become compatible with
some deep learning techniques. In addition, it is unclear whether the method has advantages in generalization performance.
In our approach, by introducing a Riemannian metric induced by non-linear functions, we constitute dynamical systems which describe motions along the shortest route from arbitrary initial points to the zeros of non-linear functions on the corresponding Riemannian manifold, that is, geodesic with respect to the Riemannian metric. One of the remarkable characteristics of our approach is that it enables us to flexibly design flows of such dynamical systems to control convergence rates. The results for the flows are then applicable to mathematical optimization problems, in particular, with deep neural network (DNN) models. In this paper, after providing mathematical ground of our methods, we experimentally demonstrate their performance in various aspects, from convergence rates to statistical properties.
2 GEODESIC FLOWS FOR NON-LINEAR EQUATIONS
We start by establishing some essential properties of dynamics which are effective for the analysis of non-linear equations. Let F : RN → RN be a smooth function and J be the Jacobian with variable w ∈ RN , that is, J = ∂F/∂w. In this section, we deal with the well-posed case that there exists a connected closed subset Ω ⊂ RN , where J is regular and the equation has a unique solution ξ. Therefore, the positive matrix G = JTJ induces a Riemannian metric g on Ω and (Ω, g) becomes a Riemannian manifold under some appropriate conditions. Let us then consider time evolution of variable w on this manifold. Our main purpose in this section is to study the characteristics of dynamical systems in which w(t) moves on geodesics between any point in Ω and ξ with respect to the metric g.
Let $L$ be a Lagrangian given by
$$L(w, v) = \frac{1}{2} v^T G(w) v \quad (1)$$
with v = dw/dt (also written as ẇ). The Euler-Lagrange equation for L is then expressed as
dp dt −∇wL = 0 (2)
with momentum vector p = Gv. If the boundary condition at two points in Ω, w(t0) = w0, w(t1) = w1, is imposed on (2), a geodesic between w0 and w1 is obtained as the solution. In contrast, if we give an appropriate initial condition, w describes the motion along the geodesic from w0 to w1. In fact, the following statement holds; Theorem 2.1. Let w0 be an arbitrary point in Ω and (w(t), p(t)) be a solution of equation (2) with the following initial condition;
w(0) = w0, p(0) = −J(w0)TF (w0). (3) Then w satisfies
F ( w(t) ) = (1− t)F ( w0 ) , (4)
for t ∈ [0, 1]. In particular, w(t) passes through the point which is a solution of non-linear equation F (w) = 0 at t = 1, that is, ξ = w(1).
We briefly describe the outline of the proof for the statement above. Throughout this paper, we regard functions of w as those of t in the trivial way; for instance, F (t) = F (w(t)). Note that p can be expressed as p = JT dF/dt. Then using the Beltrami identity for (2) with the initial condition above leads to the equation
d dt F = −F0, (5)
where F0 = F (0). Thus, a closed form expression is obtained as
F (t) = (1− t)F0, (6) which gives F (1) = 0 as asserted in Theorem 2.1.
Now we take a different expression that the coefficient (1 − t) in (6) is replaced by a different monotonically decreasing function, that is,
F (t) = ρ(t)F0, (7)
where ρ denotes a monotonically decreasing smooth function from (t0, t1) onto (0, 1). Then, we give the following differential equation whose solution is of the closed form (7);
d dt F = χ · F, χ(t) = d dt ln(ρ(t)). (8)
A motion described by this differential equation differs from the one that is described by (2), but these two motions are along the same geodesic. Theorem 2.2. Let $w_0 \in \Omega$ be an arbitrary point. The differential equation
$$J \frac{dw}{dt} = \chi \cdot F, \qquad t \in (t_0, t_1) \quad (9)$$
with an initial condition w0 = w(t0) has a unique solution that satisfies F (w(t1)) = 0. Furthermore, the orbit under flow f defined by f(w0, t) = w(t) coincides with that of the geodesic equation (2).
Note that since equation (9) is equivalent to equation (8), the orbit is invariant under coordinate transformations.
With respect to the choice of ρ, the end point t1 can be set as∞ under some appropriate conditions and Theorem 2.2 still holds in the sense that
lim t→∞ F (w(t)) = 0. (10)
In particular, if we set $\rho(t) = e^{-t}$, then $\chi(t) = -1$ and $F$ can be represented as $F(t) = e^{-t}F_0$, so that the convergence rate of $F$ is exponential. Definition 2.3. Let $w$ be a solution of the differential equation
$$J \frac{dw}{dt} = -F, \qquad t \in (t_0, t_1) \quad (11)$$
with an initial condition w0 = w(t0). The flow f(w0, t) = w(t) is called an exponentially decaying flow (EDF) of non-linear equation F (w) = 0.
For the end of this section, we present a connection between EDF and the classical Newton method. If we apply the Euler method to the differential equation (9) with step size $\Delta t$, the corresponding iteration step can be written as
$$w_{i+1} = w_i + \Delta t \cdot \chi(\tau_i) \cdot J(w_i)^{-1} F(w_i), \qquad \tau_{i+1} = \tau_i + \Delta t, \quad (12)$$
which recovers the Newton method with step size $\eta_i = \Delta t \cdot \chi(\tau_i)$.
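As a sanity check of this correspondence, the Euler-discretized flow (11)-(12) can be run on a toy two-dimensional system. The sketch below is ours (illustrative problem, step size, and iteration count; not the paper's code), with $\rho(t) = e^{-t}$ so that $\chi = -1$:

```python
import numpy as np

def F(w):
    # Toy nonlinear system F(w) = 0 with a root near w ~ (1.2, 1.5).
    return np.array([w[0]**2 + w[1] - 3.0, w[0] + w[1]**3 - 5.0])

def jacobian(w):
    return np.array([[2 * w[0], 1.0],
                     [1.0, 3 * w[1]**2]])

w = np.array([2.0, 2.0])
dt, chi = 0.1, -1.0                       # Euler step size, chi(t) = -1
for _ in range(200):
    # Solve J dw = chi * F instead of forming J^{-1} explicitly.
    dw = np.linalg.solve(jacobian(w), chi * F(w))
    w = w + dt * dw
    # ||F(w)|| decays roughly like exp(-t), with t = (iteration count) * dt.
```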
3 APPLICATION TO MATHEMATICAL OPTIMIZATIONS
Consider smooth functions ϕ : RN → RM and L : RM → R. In this section, we develop a method based on EDF for a mathematical optimization problem of the form
min w∈Ω L(ϕ(w)). (13)
In this section, the area Ω ⊂ RN is supposed to be a compact subset where there exists no stationary point except for a minima.
Let $F : \mathbb{R}^N \to \mathbb{R}^M$ denote the derivative $\partial L/\partial \phi$. An example of such problems is the least squares one, in which $L$ is given by
$$L(\phi) = \frac{1}{2}\,\|\phi\|^2. \quad (14)$$
In this case, F = ϕ. In particular, if M = N and a minimal value of loss function L̂ = L ◦ ϕ is zero, the optimization problem is equivalent to solving non-linear equation F (w) = 0.
For optimization problems, we set up a standard equation that the gradient of the loss function is zero, that is, $\nabla\hat{L}(w) = 0$. Note that the gradient can be expressed as $\nabla\hat{L}(w) = J_\phi^T F(w)$ with Jacobian $J_\phi = \partial\phi/\partial w$. Applying Theorem 2.2, we obtain the differential equation
$$H \frac{dw}{dt} = \chi \cdot J_\phi^T F, \quad (15)$$
where H is the Hessian of loss function L̂ with respect to w. Since second order differentiation is typically computationally heavy, we seek other equations which possess almost the same property as (15) especially in terms of asymptotic behavior.
Let us decompose momentum $H\dot{w}$ as
$$H \frac{dw}{dt} = \frac{d}{dt}\left( J_\phi^T F \right) = \left( \frac{d}{dt} J_\phi \right)^T F + G\, \frac{dw}{dt}, \quad (16)$$
where $J_F$ denotes the Jacobian of $F$, and $G$ is a symmetric matrix defined by
$$G = J_\phi^T J_F = J_\phi^T H_L J_\phi \quad (17)$$
with Hessian matrix $H_L$ of $L$ with respect to $\phi$. We then consider the following equation instead of (15);
$$G \frac{dw}{dt} = \chi \cdot J_\phi^T F. \quad (18)$$
This equation no longer describes the motion along the geodesic related to (15) in general. However, if $M = N$ and $J_\phi$ is invertible, then $w$ moves on another geodesic with respect to a different metric $G_F = J_F^T J_F$. In addition, if $\rho(t) = e^{-t}$, $F$ converges exponentially, which implies that $\nabla\hat{L} = J_\phi^T F$ also converges exponentially in (18) as well as in (15). In general cases, if a condition that
$$\frac{d}{dt}\left( \frac{1}{2}\,\|F\|^2 \right) = \chi\, \langle J_F G^{-1} J_\phi^T F,\; F \rangle \leq \chi\, \|F\|^2 \quad (19)$$
is satisfied, then $F$ converges to 0. This shows that in the neighborhood of solution $\xi$ of equation $\nabla\hat{L} = 0$, the momentum $G\dot{w}$ sufficiently approximates $H\dot{w}$ by (16). Definition 3.1. The flow given by
$$H \frac{dw}{dt} = -J_\phi^T F \quad (20)$$
is referred to EDF of type H and written as EDF-H. Similarly, the flow given by
$$G \frac{dw}{dt} = -J_\phi^T F \quad (21)$$
is referred to EDF of type G and written as EDF-G.
4 MODIFICATION SCHEMES FOR EDF-BASED METHODS
Like second order methods, in EDF-based methods, matrix inversion has to be carried out, which requires expensive computational cost, particularly in large systems. Moreover, we often encounter rank deficient matrices in optimization problems. To deal with rank deficiency, in general, we need pseudo-inverse matrices which are more computationally heavy. Therefore, instead of the inverse of matrix A = G,H , for fixed v ∈ RM , we consider a projection which maps r = arg minx∈RM ‖Ax − v‖ to a vector Pk(A, v) in the k-th order Krylov subspace Kk(A, v) = span{v,Av,A2v, . . . , Ak−1v} such that r = P∞(A, v). One of the basic methods to construct such a projection for indefinite symmetric systems is the minimum residual method (Paige & Saunders (1975)), which requires only the matrix multiplication. Therefore, for numerical computations, we use the following differential equation approximated by the k-th order Krylov subspace;
dw dt = χ · Pk(A, JϕF ), A = G,H. (22)
k is a hyperparameter that interpolates between the gradient method and the method based on (15) or (18). In fact, in the case that k = 1, equation (22) has the form
dw dt = c · ∇L̂, c = χ 〈u,∇L̂〉 〈u, u〉 , u = A∇L̂, (23)
which reduces to a kind of gradient methods, but the coefficient c conveys information about G or H unlike the standard gradient methods.
Next, similar to the Levenberg-Marquardt algorithm, we modify equation (18) by adding a damping factor to G in order that the method becomes more stable. So, we set
(G+ λI) dw
dt = χ · JTϕ F, (24)
where λ is a continuous positive function of t. Then, we take the same approximation as (22) with A = G+λI . The damping factor λ plays several important roles in solving practical problems. First, it has solutions well-behaved near singularities. Even in the case that G is not invertible, equation (24) can be defined and solved unlike (18). If we choose λ such that it approaches to 0 as rapidly as the gradient in (18), the asymptotic behavior of (24) is almost the same as that of (18). Particularly, in the case that χ = −1, we set λ = a‖JTϕ F‖b with a, b > 0, so that the convergence rate of (24) stays exponential. Second, the damping factor makes the method compatible with stochastic approaches such as mini-batch training, in deep learning models. Since the orbit of (24) is affected by the gradient due to the damping factor λ, the method based on this equation could take advantage of stochastic approaches as other gradient methods do. (For implementation of the algorithm, see Appendix A.)
Finally, to accelerate the EDF-based methods, it is sometimes effective to change equation (18) into a second-order differential equation, particularly in the case in which the approximation method with k is applied. Specifically, we take the equation
$$G \frac{d^2 w}{dt^2} + \kappa \cdot G \frac{dw}{dt} + J_\phi^T F = 0, \quad (25)$$
where $\kappa$ is a real-valued function of $t$. The convergence properties of the methods are controlled by the following differential equation;
$$\frac{d^2 F}{dt^2} + \kappa \cdot \frac{dF}{dt} + F = 0. \quad (26)$$
There are two ways of determining $\kappa$. The first one is to set $\kappa = \alpha$ with constant $\alpha$ (W1), which leads to a scheme similar to the momentum gradient descent method. The other one is to set $\kappa(t) = \alpha t^{-1}$ (W2). In this setting, the equation can be discretized in a similar way as described in Su et al. (2014), which is analogous to Nesterov's acceleration scheme. The behaviour of (26) under these two damping schedules is illustrated in the sketch below.
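To see how the two choices of $\kappa$ shape the decay of $F$, one can integrate the scalar control equation (26) directly. The following sketch is our illustration (the value of $\alpha$, the step size, and the semi-implicit Euler scheme are assumptions, not the authors' settings); it only visualizes (26), not the full optimizer:

```python
import numpy as np

def simulate(kappa_fn, dt=0.1, T=30.0, F0=1.0):
    # Semi-implicit Euler integration of F'' + kappa(t) * F' + F = 0.
    F, dF, t = F0, 0.0, dt
    traj = []
    while t < T:
        dF += -dt * (kappa_fn(t) * dF + F)
        F += dt * dF
        traj.append(F)
        t += dt
    return np.array(traj)

w1 = simulate(lambda t: 3.0)        # W1: constant damping (momentum-like)
w2 = simulate(lambda t: 3.0 / t)    # W2: kappa = alpha / t (Nesterov-like)
# Comparing |w1| and |w2| over time shows the different decay profiles of F.
```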
5 OPTIMIZATION PROBLEMS IN THE FIELD OF DEEP LEARNING
In this section, we present how optimization problems are set up in the field of deep learning. Let x = {xj}n−1j=0 and y = {yj} n−1 j=0 denote training datasets of input and output, respectively, where n is the size of dataset. Let dx and dy be dimensions of input data and output data, respectively. We write ϕnn for neural networks with parameters w ∈ RN , and define ϕ by the direct sum of vectors {ϕj} given by ϕj(w) = ϕnn(xj , w), that is, ϕ = ⊕ϕj . Note that M = n × dy in this case. Then finding a minima of a given loss function is proposed as a standard optimization problem to train networks.
For the image classification tasks, there are two typical cases of setting loss functions. In the first case, the loss is set as (14). As already mentioned, in this case, $F = \phi$ and $H_L = I$. In the second case, the loss is given by cross entropy with the softmax function, that is,
$$L(\phi) = -\frac{1}{n} \sum_{j=0}^{n-1} y_j \cdot \ln(\theta_j), \quad (27)$$
where $\theta$ denotes the softmax function and $\theta_j = \theta(\phi_j)$. In this case, $F$ is expressed by the direct sum such that $F = \oplus F_j$ with
$$F_j = \frac{1}{n}\left( s_j \theta_j - y_j \right), \qquad j = 0, \ldots, n-1, \quad (28)$$
where $s_j$ denotes the sum of all elements in vector $y_j$ for each $j$. Note that if each $y_j$ is given as a probability vector, then $s_j = 1$. Moreover, $H_L$ is expressed as $H_L = \oplus H_j$ with
$$H_j = \frac{s_j}{n}\left( \mathrm{diag}(\theta_j) - \theta_j \otimes \theta_j \right), \quad (29)$$
where $\otimes$ denotes the outer product. In both cases, the loss functions take the minimum value 0 if and only if $F = 0$.
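Written out in code (a small sketch under the convention above; the function and variable names are ours, and logits_j stands for $\phi_j$), the per-example residual (28) and Hessian block (29) are:

```python
import numpy as np

def per_example_F_H(logits_j, y_j, n):
    # Softmax probabilities theta_j, computed stably.
    theta = np.exp(logits_j - logits_j.max())
    theta /= theta.sum()
    s = y_j.sum()                                              # s_j = sum of label-vector entries
    F_j = (s * theta - y_j) / n                                # Eq. (28)
    H_j = (s / n) * (np.diag(theta) - np.outer(theta, theta))  # Eq. (29)
    return F_j, H_j
```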
6 EXPERIMENTS
In this study, we conducted following three groups of experiments; First, we examined optimization performance of the EDF-based methods on a data-fitting problem with a simple convolutional neural network model. Second, we tested both optimization and generalization performance in standard settings of classification tasks in which we employed residual network models (Resnet He et al. (2016)) and CIFAR-10/100 datasets. Third, we incorporated some techniques of data augmentation and regularization in the training into our experiment as we would like to measure effectiveness of our methods in more practical settings.
The primary purpose of the experiments in this paper is not to pursue the state-of-the-art performance, but to explore how the statistical results pertain to those of optimization when the loss functions are minimized. Therefore, we tuned hyperparameters of each method in accordance with the optimization benchmark.
It should be noted that the conjecture concerning non-convex optimization problems in the field of deep learning (Baldi & Hornik (1989), Choromanska et al. (2015)) is still an open problem (studied for linear cases in Baldi & Lu (2012), Kawaguchi (2016)). Hence, for experiments in this paper, we do not discuss whether each optimization method actually reaches to a global solution.
6.1 OPTIMIZATION PERFORMANCE ON A STANDARD DATA-FITTING PROBLEM
We evaluated convergence performance of EDF-based methods (type G and type H) on a data-fitting problem of CIFAR-10, that is, full-batch training in the context of deep learning. The model we employed in these experiments was the convolutional neural network that consisted of two convolution filters with rectified linear units and max pooling layers, followed by two fully connected layers. For EDF-based methods, the step size was fixed to 1.0, and no damping factors were used. In addition, the acceleration technique derived from W2 of (25) was adapted, since W2 achieved better performance than W1. A similar experiment with a different type of second-order methods was conducted in Sohl-Dickstein et al. (2014).
First, we examined change in convergence performance of EDF-G depending on hyperparameter k, the order of Krylov subspace. The results are illustrated in the left-hand side of Figure 1. The presented trend is consistent with the theoretical fact that k interpolates between the gradient method (for small k) and the method based on dynamics (18) (for large k). In other words, when k is small, the method is similar to a gradient method, which converges slow, but as k becomes larger, the method leads to better approximation of the inverse matrix, which gives a rapidly decaying flow.
Next, we compared EDF-G (k = 1, 30) with EDF-H and other standard optimizers in deep learning: gradient descent methods with Polyaks momentum scheme (Momentum) and Nesterov’s acceleration scheme (NAG), and Adam. The step sizes for Momentum, NAG, and Adam were fixed to 0.01, 0.001, and 0.001, respectively.
The right-hand side of Figure 1 shows that both EDF-based methods made the loss decrease more rapidly than other standard optimizers. Even when EDF-based methods were reduced to the gradient methods at k = 1, EDF-G outperformed standard optimizers.
The figure also presents the difference between EDF-G and EDF-H. While their convergence rates around extremes were almost the same, the overall convergence rates of EDF-G were better than that of EDF-H.
As has been found, second-order methods on full-batch training converge to the solution within a few iterations. However, with such training, generalization quality becomes worse than when a stochastic approach with a small mini-batch is taken. Like other second-order methods, EDF also suffers from a negative impact of full-batch training on generalization performance, though setting the hyperparameter k small makes EDF compatible with stochastic approaches and able to take their advantages (see Appendix B).
6.2 PERFORMANCES ON CLASSIFICATION TASKS IN DEEP LEARNING
In the following experiments, we compared both optimization and generalization performance of EDF-G with other methods, the momentum stochastic gradient descent method (MSGD) and Adam, on classification tasks for CIFAR-10/100. The experiments were conducted employing residual network models with batch normalization (Resnet-56 for CIFAR-10 and Resnet-110 for CIFAR-100), and working with a mini-batch of size 250.
For CIFAR-100, during pre-investigation of the dataset, we found that 9 pairs of data, each of which has the same image but different labels, contaminated the neural network and might have had a non-negligible influence on both optimization and generalization performance (See Appendix C). Therefore, in our experiments on CIFAR-100, we excluded them from the training dataset and used the rest of the data (size = 49982).
At the end of each epoch, we computed training loss in the following manner. First, the loss was defined as (27) where n is the total number of data. Second, to calculate the statistics for batch normalization in the loss, as described in Ioffe & Szegedy (2015), we adopted so-called ”inference mode” in which moving averages of mean and variance over mini-batches are used for batch-normalization.
For EDF, we tested the case k = 1 and k = 2, with the damping factor set as a = b = 1. The momentum coefficient was fixed to α = 0.9 for the W1 acceleration (see 25).
For each of the tasks, we ran tests on 10 different learning rates; for EDF between 0.1 and 2.0, for Momentum SGD between 0.1 and 10.0, for Adam between 0.0001 and 0.1. We then chose the one with the best convergence performance for each optimizer. For EDF with W2, to obtain stability in convergence, the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. Such a change in learning rate in the middle of optimization did not bring advantages to either optimization or generalization performances for other methods including EDF with W1 (see Appendix D).
The results are presented in Figures 2 and 3. As shown in the figures, with respect to the optimization performance, EDF reached an optimal solution with smaller error at higher convergence rate than Adam and Momentum SGD, even when k = 1, 2. Moreover, EDF overall attained better generalization performance than other methods.
6.3 PERFORMANCES WITH TECHNIQUES IN A MORE PRACTICAL SETTING
For classification tasks, generalization performance often improves by adopting several techniques, such as data augmentation and regularization. In this group of experiments, employing the data augmentation based on random horizontal flip and shift and the L2 regularization, we conducted comparisons between EDF and other methods that are similar to those in the previous sections.
When adopting these techniques, rate decay scheme has as large impact on optimization performance as learning rate itself. Because effectiveness of each rate decay scheme much depends on optimization methods, it is almost impossible to find the best scheme that is applicable to all the methods. For this reason, for MSGD and Adam, as initial learning rates, we chose one of the standard rates for MSGD and Adam, 0.1 and 0.001, respectively, and at the ends of the 100-th, 140-th, 180-th epochs, reset the rates to 0.1 times the rates at the moment. For EDF, we ran tests on two different rates 0.5 and 0.75, the optimal rates found in the experiments of Section 6.2, and reset to 0.2 times those of the initial values, only at the end of the 100-th epoch, in order to demonstrate performance of the EDF more clearly. Among the results obtained with EDF, we chose the ones with the best optimization performance.
The result of the comparison is presented in Figures 4 and 5. As can be seen, we found a condition in which EDF achieved better performance than other methods on optimization while achieving sufficient levels of generalization.
7 CONCLUSION
Obtaining good statistical results from limited available data is a critical goal in machine learning. To reach this goal, while developing an effective model is an essential approach, eliciting the best performance from the fixed model through optimization is important as well. In our study, to examine the performance of our optimization methods, Exponentially Decaying Flows (EDF) based methods, we explored their generalization properties pertaining to results of optimization. Our experiments showed that EDF-based methods are more likely to achieve optimal solutions which generalize the test data well than other standard optimizers are. Therefore, EDF-based methods are considered to be optimization methods that have a high potential in their application to various tasks, and thus, are worthwhile to be sophisticated through future studies.
A ALGORITHMS OF EDF-BASED METHODS
In terms of computation of the EDF-based methods on GPU, the Jacobian-vector product can be carried out at almost the same cost as the gradient of the loss function. In fact, multiplying a vector by the Jacobian and by its transposed matrix (written as R-op and L-op, respectively) is implemented in combination with gradients of scalar functions. For the pseudo-code of the update scheme of EDF-G with L/R-op, refer to Algorithm 1, and to Algorithm 2 in the particular case that k = 1.
Algorithm 1: Update scheme for EDF-G with non-preconditioned MINRES
Input: w_i, k, a, b, η.  Output: w_{i+1}
  Compute g = ∇L(ϕ); set s = ‖g‖, λ = a s^b
  Initialize parameters for MINRES as β = s, ξ = β, γ_0 = γ_1 = 1, σ_0 = σ_1 = 0, v = v_0 = 0, v_1 = g, q_0 = q_1 = 0
  for j = 1 to k do
    v_1 = v_1 / β
    Compute u = J_F v_1 using R-op
    Compute u = J_ϕ^T u using L-op
    u = u + λ g
    α = ⟨v_1, u⟩
    u = u − α v_1 − β v_0
    v_0 = v_1, v_1 = u
    δ = γ_1 α − γ_0 σ_1 β,  ρ_2 = σ_1 α + γ_0 γ_1 β,  ρ_3 = σ_0 β
    β = ‖v_1‖
    ρ_1 = √(δ² + β²)
    γ_0 = γ_1, γ_1 = δ/ρ_1, σ_0 = σ_1, σ_1 = β/ρ_1
    u = q_0, q_0 = q_1, q_1 = (1/ρ_1)(v_0 − ρ_2 q_0 − ρ_3 u)
    v = v + γ_1 ξ q_1
    ξ = −σ_1 ξ
  end for
  w_{i+1} = w_i − η v
Algorithm 2: Update scheme for k = 1
Input: w_i, a, b, η.  Output: w_{i+1}
  Compute g = ∇L(ϕ); set s = ‖g‖, λ = a s^b
  Compute u = J_F g using R-op
  Compute u = J_ϕ^T u using L-op
  u = u + λ g
  c = ⟨g, u⟩ / ⟨u, u⟩,  v = c g
  w_{i+1} = w_i − η v
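For concreteness, a minimal NumPy rendering of Algorithm 2 for the least-squares case (14) — where F = ϕ, so both the L-op and the R-op reduce to multiplications by the explicit Jacobian — might look as follows. The toy residual, Jacobian, and hyperparameter values are our assumptions for illustration, not the authors' code:

```python
import numpy as np

def phi(w):
    # Toy residual function phi: R^2 -> R^3, so L(phi) = 0.5 * ||phi(w)||^2.
    return np.array([w[0] - 1.0, w[1] + 0.5, w[0] * w[1]])

def jac(w):
    # Explicit Jacobian J_phi; an autodiff L-/R-op would replace this in practice.
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [w[1], w[0]]])

w, eta, a, b = np.array([3.0, -2.0]), 1.0, 1.0, 1.0
for _ in range(50):
    J = jac(w)
    F = phi(w)
    g = J.T @ F                       # gradient of the loss
    lam = a * np.linalg.norm(g) ** b  # damping factor lambda = a * ||g||^b
    u = J.T @ (J @ g) + lam * g       # (G + lambda I) g via R-op then L-op
    c = g @ u / (u @ u)               # k = 1 coefficient from the Krylov projection
    w = w - eta * c * g               # step along -g with a curvature-aware scale
```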
B COMPARISON BETWEEN FULL-BATCH AND STOCHASTIC TRAININGS
Figure 6 shows the results of experiments using EDF for simple examples that compare a full-batch training with stochastic trainings. In this example, the convolutional network similar to that used in Section 6.1 was employed on MNIST. The curve labeled as ”EDF F” depicts the result of Full-batch training per step, and those labeled as ”EDF S” illustrate the results of stochastic trainings per epoch with a mini-batch of size 500.
Let us divide the dataset $\{x, y\} = \{x_j, y_j\}_{j=0}^{n-1}$ into distinct subsets of size $p$ such that
$$\{x, y\} = \bigcup_{i=1}^{l} \left\{ x^{(i)}, y^{(i)} \right\}, \qquad \left\{ x^{(i)}, y^{(i)} \right\} = \{x_j, y_j\}_{j=p(i-1)}^{pi-1}, \qquad pl = n. \quad (30)$$
Then, gradients and Jacobian-vector multiplications are calculated as
$$\nabla L(\phi) = \sum_i \nabla L\left( \phi^{(i)} \right), \qquad G v = \sum_i J_{\phi^{(i)}}^T J_{F^{(i)}}\, v, \quad (31)$$
where ϕ(i) and F (i) are the subcomponents of ϕ and F , respectively, corresponding to the decomposition above. Thus, for a fixed k, if we set p as the same size as a mini-batch, then the computational cost per step of a full-batch training becomes almost the same as that per epoch of a mini-batch training.
C IRREGULAR DATA IN CIFAR-100
The dataset of CIFAR-100 includes 9 pairs of irregular data, each of which has the same image but different labels. These data are enumerated in Figure 7. For instance, the 8393-rd and the 36874-th images are the same, but they have different labels, ”girl (class 35)” and ”baby (class 2).”
Such pairs contaminate the training process. In fact, in our experiment, when the network model was optimized with the full dataset, one of the images in each 9 pairs above could not be classified correctly, which resulted in stagnated training accuracy at 99.982 %. Moreover, generalization also deteriorated when irregular data were contained. For the details of these results, see Figure 8.
D PERFORMANCES WITH THE RATE DECAY SCHEME
Figure 9 and Figure 10 present the optimization and generalization performance of each optimizer when the rate decay scheme was adopted to the experiment with the same setting as in Figure 2 and Figure 3. The rate decay scheme was that the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. | 1. What is the main contribution of the paper regarding optimization methods for deep learning?
2. What are the weaknesses of the paper, particularly in its mathematical analysis and experimental design?
3. Do you have any concerns about the novelty and relevance of the proposed approach in the field of deep learning?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any specific suggestions or recommendations for improving the paper or its methodology? | Review | Review
The first abstract reads, "The field of deep learning has been craving for an optimization method that shows outstanding property for both optimization and generalization", what does it mean to say that the field is "craving"? Yes, a rigorously (mathematically proven or empirically tested extensively) better algorithm (compared to the existing algorithm) to optimize the ERM problems that appear in training deep neural networks is always valuable. But I find that this paper is not delivering on both these ends. To summarize, the main idea of the paper is to view ERM optimization in continuous time to develop an algorithm to train neural networks. Section 2 introduces geodesic flows with NO references, which can easily mislead readers that the math is new. For a field that has been known in mathematics for essentially hundreds of years, I'd expect that the section is filled with references. Here's one from recent years that tries to give a good overview -- https://arxiv.org/abs/1609.03890.
Essentially Theorem 2.1 is a restatement of Gronwall's Lemma, with the implications of Theorem 2.1 not clear at all. Yes, we can choose F(t) to be anything and get an exponential convergence rate in continuous time, but it is *not* true that it is possible to get such exponential convergence in discrete time (which is where training happens eventually). The technical difficulty is in figuring out the right discretization that can achieve the maximum possible convergence rate. See https://arxiv.org/abs/1805.00521 for example. The authors use Euler's explicit discretization, which is not the best even in simple settings.
All the modifications the authors propose in Sections 3, 4 and 5 are generic and straight out of standard numerical linear algebra textbooks.
As far as I can see, the experiments are set up in a somewhat standard way with CIFAR-10/100. I think MSGD should be tested with mini-batch 64, not 250, since smaller batch sizes lead to better generalization performance, as recent works indicate (see for example https://openreview.net/forum?id=HyWrIgW0W).
ICLR | Title
Exponentially Decaying Flows for Optimization in Deep Learning
Abstract
The field of deep learning has been craving for an optimization method that shows outstanding property for both optimization and generalization. We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function. In our method, the flows refer to Exponentially Decaying Flows (EDF), as they can be designed to converge on the local solutions exponentially. In this paper, we conduct experiments to show its high performance on optimization benchmarks (i.e., convergence properties), as well as its potential for producing good machine learning benchmarks (i.e., generalization properties).
1 INTRODUCTION
Due to recent progress in the field of machine learning, it becomes more and more important to develop and sophisticate methods of solving hard optimization problems. At the same time, in this field, such methods are additionally required to elicit decent generalization performance from statistical models. An efficient method of mathematical optimization, however, does not always produce sufficient generalization properties, since these are involved with two distinct mathematical problems; The former is to find one of the solutions which minimize a given (possibly non-convex) objective function, and the latter is to adjust parameters so that a statistical estimator achieves its best result. To address such a hard issue, we introduce a new mathematical perspective on optimization, and develop a method for machine learning based on this perspective. We then empirically show its rapid convergence rate and high compatibility with deep learning techniques, as well as good statistical properties.
In this field, many optimization methods have been proposed and modified so that they fit specific problems or models. One of the current standard methods is the gradient descent method. The method tends to converge slowly in general optimization problems. However, with various specific techniques, such as mini-batch training and batch normalization (Ioffe & Szegedy (2015)), it has been found to be efficient for state-of-the-art purposes in the field of deep learning. Another class of methods that are now becoming popular and standard is adaptive methods, such as AdaGrad (Duchi et al. (2011)) and Adam (Kingma & Ba (2015)). Compared to the gradient descent method, these methods have been shown to improve convergence rates with almost the same computational cost as the gradient descent method, but are reported to result in poor statistical outcomes in some cases of machine learning (Wilson et al. (2017)).
Other class of methods that have been thoroughly studied in the theory of mathematical optimization is second-order methods, such as the Newton method and the Gauss-Newton method. These methods possess great convergence properties, and in particular, have a potential to overcome plateau’s problems (Dauphin et al. (2014)). Furthermore, when it comes to applications in stochastic settings, the method based on the Gauss-Newton Matrix (or Fisher information Matrix) is shown to asymptotically attain the best statistical result, which is called Fisher efficiency (see Amari (1998)). Despite these attractive characteristics, the methods have not yet been spotlighted in the field of machine learning due to several severe drawbacks; They suffer from high computational cost in general and their useful properties are no longer guaranteed in practical settings (see Section 12 in Martens (2014)). One of the continuously developing second-order methods in this field, K-FAC ( Ba et al. (2017), Grosse & Martens (2016) ), successfully produced high convergence rate empirically with relatively low computational cost. However, it still requires much effort to become compatible with
some deep learning techniques. In addition, it is unclear whether the method has advantages in generalization performance.
In our approach, by introducing a Riemannian metric induced by non-linear functions, we constitute dynamical systems which describe motions along the shortest route from arbitrary initial points to the zeros of non-linear functions on the corresponding Riemannian manifold, that is, geodesic with respect to the Riemannian metric. One of the remarkable characteristics of our approach is that it enables us to flexibly design flows of such dynamical systems to control convergence rates. The results for the flows are then applicable to mathematical optimization problems, in particular, with deep neural network (DNN) models. In this paper, after providing mathematical ground of our methods, we experimentally demonstrate their performance in various aspects, from convergence rates to statistical properties.
2 GEODESIC FLOWS FOR NON-LINEAR EQUATIONS
We start by establishing some essential properties of dynamics which are effective for the analysis of non-linear equations. Let F : RN → RN be a smooth function and J be the Jacobian with variable w ∈ RN , that is, J = ∂F/∂w. In this section, we deal with the well-posed case that there exists a connected closed subset Ω ⊂ RN , where J is regular and the equation has a unique solution ξ. Therefore, the positive matrix G = JTJ induces a Riemannian metric g on Ω and (Ω, g) becomes a Riemannian manifold under some appropriate conditions. Let us then consider time evolution of variable w on this manifold. Our main purpose in this section is to study the characteristics of dynamical systems in which w(t) moves on geodesics between any point in Ω and ξ with respect to the metric g.
Let $L$ be a Lagrangian given by
$$L(w, v) = \frac{1}{2} v^T G(w) v \quad (1)$$
with v = dw/dt (also written as ẇ). The Euler-Lagrange equation for L is then expressed as
dp dt −∇wL = 0 (2)
with momentum vector p = Gv. If the boundary condition at two points in Ω, w(t0) = w0, w(t1) = w1, is imposed on (2), a geodesic between w0 and w1 is obtained as the solution. In contrast, if we give an appropriate initial condition, w describes the motion along the geodesic from w0 to w1. In fact, the following statement holds; Theorem 2.1. Let w0 be an arbitrary point in Ω and (w(t), p(t)) be a solution of equation (2) with the following initial condition;
w(0) = w0, p(0) = −J(w0)TF (w0). (3) Then w satisfies
F ( w(t) ) = (1− t)F ( w0 ) , (4)
for t ∈ [0, 1]. In particular, w(t) passes through the point which is a solution of non-linear equation F (w) = 0 at t = 1, that is, ξ = w(1).
We briefly describe the outline of the proof for the statement above. Throughout this paper, we regard functions of w as those of t in the trivial way; for instance, F (t) = F (w(t)). Note that p can be expressed as p = JT dF/dt. Then using the Beltrami identity for (2) with the initial condition above leads to the equation
d dt F = −F0, (5)
where F0 = F (0). Thus, a closed form expression is obtained as
F (t) = (1− t)F0, (6) which gives F (1) = 0 as asserted in Theorem 2.1.
Now we take a different expression that the coefficient (1 − t) in (6) is replaced by a different monotonically decreasing function, that is,
F(t) = ρ(t) F0,    (7)
where ρ denotes a monotonically decreasing smooth function from (t0, t1) onto (0, 1). Then, we give the following differential equation whose solution is of the closed form (7):
dF/dt = χ · F,   χ(t) = (d/dt) ln ρ(t).    (8)
A motion described by this differential equation differs from the one that is described by (2), but these two motions are along the same geodesic. Theorem 2.2. Let w0 ∈ Ω be an arbitrary point. The differential equation
J dw/dt = χ · F,   t ∈ (t0, t1)    (9)
with an initial condition w0 = w(t0) has a unique solution that satisfies F (w(t1)) = 0. Furthermore, the orbit under flow f defined by f(w0, t) = w(t) coincides with that of the geodesic equation (2).
Note that since equation (9) is equivalent to equation (8), the orbit is invariant under coordinate transformations.
With respect to the choice of ρ, the end point t1 can be set as ∞ under some appropriate conditions and Theorem 2.2 still holds in the sense that
lim_{t→∞} F(w(t)) = 0.    (10)
In particular, if we set ρ(t) = e^{−t}, then χ(t) = −1 and F can be represented as F(t) = e^{−t} F0, so that the convergence rate of F is exponential. Definition 2.3. Let w be a solution of the differential equation
J dw/dt = −F,   t ∈ (t0, t1)    (11)
with an initial condition w0 = w(t0). The flow f(w0, t) = w(t) is called an exponentially decaying flow (EDF) of non-linear equation F (w) = 0.
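As a small worked example (ours, added for illustration; it is not part of the original derivation), take the one-dimensional case F(w) = w^2 − a with a > 0 and w0 > 0. Then
J(w) = 2w,   and   J dw/dt = −F   gives   d(w^2 − a)/dt = −(w^2 − a),
so   F(t) = e^{−t} F(0)   and   w(t) = √(a + e^{−t}(w0^2 − a)) → √a   as t → ∞,
that is, the flow follows the Newton direction continuously and drives F to zero at an exponential rate.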
To close this section, we present a connection between EDF and the classical Newton method. If we apply the Euler method to the differential equation (9) with step size Δt, the corresponding iteration step can be written as
w_{i+1} = w_i + Δt · χ(τ_i) · J(w_i)^{−1} F(w_i),   τ_{i+1} = τ_i + Δt,    (12)
which recovers the Newton method with step size η_i = Δt · χ(τ_i).
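The following is a minimal numerical sketch (ours, not code from the paper) of the discretization (12) on a toy 2-D system with an analytic Jacobian, using ρ(t) = 1 − t so that χ(t) = −1/(1 − t); along the flow, F(w(t)) should track (1 − t) F(w0) up to discretization error, as in Theorem 2.1.

# Euler-discretized update (12) on a toy 2-D non-linear system (illustration only).
import numpy as np

def F(w):                        # toy non-linear map R^2 -> R^2
    return np.array([w[0]**2 + w[1] - 3.0, np.exp(w[0]) - w[1]])

def J(w):                        # its analytic Jacobian
    return np.array([[2.0 * w[0], 1.0], [np.exp(w[0]), 0.0]])

dt, tau = 1e-3, 0.0
w = np.array([2.0, 0.5])
F0 = F(w)
while tau < 1.0 - dt:
    chi = -1.0 / (1.0 - tau)                         # chi = d/dt ln(1 - t)
    w = w + dt * chi * np.linalg.solve(J(w), F(w))   # Newton-like step (12)
    tau += dt
print(F(w), (1.0 - tau) * F0)    # F(w) stays close to (1 - tau) * F0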
3 APPLICATION TO MATHEMATICAL OPTIMIZATIONS
Consider smooth functions ϕ : RN → RM and L : RM → R. In this section, we develop a method based on EDF for a mathematical optimization problem of the form
min w∈Ω L(ϕ(w)). (13)
In this section, the area Ω ⊂ RN is supposed to be a compact subset where there exists no stationary point except for a minima.
Let F : R^N → R^M denote the derivative ∂L/∂ϕ. An example of such problems is the least squares problem, in which L is given by
L(ϕ) = (1/2) ‖ϕ‖².    (14)
In this case, F = ϕ. In particular, if M = N and the minimal value of the loss function L̂ = L ◦ ϕ is zero, the optimization problem is equivalent to solving the non-linear equation F(w) = 0.
For optimization problems, we set up a standard equation that the gradient of loss function is zero, that is, ∇L̂(w) = 0. Note that the gradient can be expressed as ∇L̂(w) = JTϕ F (w) with Jacobian Jϕ = ∂ϕ/∂w. Applying Theorem 2.2, we obtain the differential equation
H dw/dt = χ · J_ϕ^T F,    (15)
where H is the Hessian of loss function L̂ with respect to w. Since second order differentiation is typically computationally heavy, we seek other equations which possess almost the same property as (15) especially in terms of asymptotic behavior.
Let us decompose momentum Hẇ as
H dw/dt = d/dt (J_ϕ^T F) = (dJ_ϕ/dt)^T F + G dw/dt,    (16)
where JF denotes the Jacobian of F , and G is a symmetric matrix defined by
G = J_ϕ^T J_F = J_ϕ^T H_L J_ϕ,    (17)
with Hessian matrix HL of L with respect to ϕ. We then consider the following equation instead of (15);
G dw/dt = χ · J_ϕ^T F.    (18)
This equation no longer describes the motion along the geodesic related to (15) in general. However, if M = N and J_ϕ is invertible, then w moves on another geodesic with respect to a different metric G_F = J_F^T J_F. In addition, if ρ(t) = e^{−t}, F converges exponentially, which implies that ∇L̂ = J_ϕ^T F also converges exponentially in (18) as well as in (15). In general cases, if the condition
d/dt ((1/2) ‖F‖²) = χ ⟨J_F G^{−1} J_ϕ^T F, F⟩ ≤ χ ‖F‖²    (19)
is satisfied, then F converges to 0. This shows that in the neighborhood of the solution ξ of the equation ∇L̂ = 0, the momentum Gẇ sufficiently approximates Hẇ by (16). Definition 3.1. The flow given by
H dw/dt = −J_ϕ^T F    (20)
is referred to as EDF of type H and written as EDF-H. Similarly, the flow given by
G dw/dt = −J_ϕ^T F    (21)
is referred to as EDF of type G and written as EDF-G.
4 MODIFICATION SCHEMES FOR EDF-BASED METHODS
Like second order methods, in EDF-based methods, matrix inversion has to be carried out, which requires expensive computational cost, particularly in large systems. Moreover, we often encounter rank deficient matrices in optimization problems. To deal with rank deficiency, in general, we need pseudo-inverse matrices which are more computationally heavy. Therefore, instead of the inverse of matrix A = G,H , for fixed v ∈ RM , we consider a projection which maps r = arg minx∈RM ‖Ax − v‖ to a vector Pk(A, v) in the k-th order Krylov subspace Kk(A, v) = span{v,Av,A2v, . . . , Ak−1v} such that r = P∞(A, v). One of the basic methods to construct such a projection for indefinite symmetric systems is the minimum residual method (Paige & Saunders (1975)), which requires only the matrix multiplication. Therefore, for numerical computations, we use the following differential equation approximated by the k-th order Krylov subspace;
dw/dt = χ · P_k(A, J_ϕ^T F),   A = G, H.    (22)
The hyperparameter k interpolates between the gradient method and the method based on (15) or (18). In fact, in the case that k = 1, equation (22) has the form
dw/dt = c · ∇L̂,   c = χ ⟨u, ∇L̂⟩ / ⟨u, u⟩,   u = A ∇L̂,    (23)
which reduces to a kind of gradient method, but the coefficient c conveys information about G or H, unlike standard gradient methods.
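As a minimal sketch (ours) of the k = 1 update (23), take χ = −1 and A = G for a linear least-squares problem ϕ(w) = Φw − b, so that G = Φ^T Φ can be formed explicitly; the step size Δt is taken to be 1, as in the experiments of Section 6.1.

# k = 1 update (23) with chi = -1 and A = G on a linear least-squares problem (sketch).
import numpy as np

rng = np.random.default_rng(0)
Phi, b = rng.normal(size=(50, 10)), rng.normal(size=50)
G = Phi.T @ Phi
w = np.zeros(10)

for _ in range(100):
    g = Phi.T @ (Phi @ w - b)         # gradient of L-hat = 0.5 * ||phi||^2
    u = G @ g                         # u = A * grad, here with an explicit matrix
    c = -np.dot(u, g) / np.dot(u, u)  # chi = -1 scales the gradient direction
    w = w + c * g                     # Euler step of dw/dt = c * grad with step 1
print(0.5 * np.linalg.norm(Phi @ w - b) ** 2)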
Next, similar to the Levenberg-Marquardt algorithm, we modify equation (18) by adding a damping factor to G so that the method becomes more stable. That is, we set
(G + λI) dw/dt = χ · J_ϕ^T F,    (24)
where λ is a continuous positive function of t. Then, we take the same approximation as (22) with A = G+λI . The damping factor λ plays several important roles in solving practical problems. First, it has solutions well-behaved near singularities. Even in the case that G is not invertible, equation (24) can be defined and solved unlike (18). If we choose λ such that it approaches to 0 as rapidly as the gradient in (18), the asymptotic behavior of (24) is almost the same as that of (18). Particularly, in the case that χ = −1, we set λ = a‖JTϕ F‖b with a, b > 0, so that the convergence rate of (24) stays exponential. Second, the damping factor makes the method compatible with stochastic approaches such as mini-batch training, in deep learning models. Since the orbit of (24) is affected by the gradient due to the damping factor λ, the method based on this equation could take advantage of stochastic approaches as other gradient methods do. (For implementation of the algorithm, see Appendix A.)
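The following is a minimal sketch (ours, not the authors' code) of one damped, k-truncated update for equation (24) with χ = −1, using scipy's MINRES with matrix-free products. The Gauss-Newton products are written for the least-squares case F = ϕ with an explicit Jacobian, purely for illustration.

# One update step for (24): solve (G + lambda*I) d = grad with MINRES truncated at k.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

def step(w, phi, jac, k=2, a=1.0, b=1.0, eta=1.0):
    Jw = jac(w)                                   # explicit Jacobian, illustration only
    grad = Jw.T @ phi(w)                          # grad L-hat = J_phi^T F (here F = phi)
    lam = a * np.linalg.norm(grad) ** b           # damping lambda = a * ||grad||^b
    Gv = LinearOperator(
        shape=(w.size, w.size),
        matvec=lambda v: Jw.T @ (Jw @ v) + lam * v,  # (G + lambda*I) v
    )
    d, _ = minres(Gv, grad, maxiter=k)            # k-th order Krylov approximation
    return w - eta * d                            # chi = -1: move along -P_k(., grad)

# usage on a tiny least-squares problem phi(w) = A @ w - y
A, y = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]]), np.array([1.0, 2.0, 0.5])
w = np.zeros(2)
for _ in range(20):
    w = step(w, lambda w: A @ w - y, lambda w: A, k=2)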
Finally, to accelerate the EDF-based methods, it is sometimes effective to change equation (18) into a second-order differential equation, particularly in the case in which the approximation method with k is applied. Specifically, we take the equation
G d²w/dt² + κ · G dw/dt + J_ϕ^T F = 0,    (25)
where κ is a real-valued function of t. The convergence properties of the methods are controlled by the following differential equation;
d²F/dt² + κ · dF/dt + F = 0.    (26)
There are two ways of determining κ. The first is to set κ = α with constant α (W1), which leads to a scheme similar to the momentum gradient descent method. The other is to set κ(t) = α t^{−1} (W2). In this setting, the equation can be discretized in a similar way as described in Su et al. (2014), which is analogous to Nesterov's acceleration scheme.
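For concreteness, one possible discretization of (25) is the following semi-implicit Euler scheme (our own sketch; the paper does not spell this out beyond the reference to Su et al. (2014)):
v_{i+1} = (1 − κ_i Δt) v_i − Δt · P_k(G_i, J_ϕ^T F_i),   w_{i+1} = w_i + Δt · v_{i+1},
with κ_i = α for W1 and κ_i = α / t_i for W2.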
5 OPTIMIZATION PROBLEMS IN THE FIELD OF DEEP LEARNING
In this section, we present how optimization problems are set up in the field of deep learning. Let x = {x_j}_{j=0}^{n−1} and y = {y_j}_{j=0}^{n−1} denote training datasets of inputs and outputs, respectively, where n is the size of the dataset. Let d_x and d_y be the dimensions of the input and output data, respectively. We write ϕ_nn for a neural network with parameters w ∈ R^N, and define ϕ as the direct sum of the vectors {ϕ_j} given by ϕ_j(w) = ϕ_nn(x_j, w), that is, ϕ = ⊕ϕ_j. Note that M = n × d_y in this case. Finding a minimum of a given loss function is then posed as the standard optimization problem for training networks.
For the image classification tasks, there are two typical cases of setting loss functions. In the first case, the loss is set as (14). As already mentioned, in this case, F = ϕ and HL = I . In the second case, the loss is given by cross entropy with softmax function, that is,
L(ϕ) = −(1/n) Σ_{j=0}^{n−1} y_j · ln(θ_j),    (27)
where θ denotes the softmax function and θj = θ(ϕj). In this case, F is expressed by the direct sum such that F = ⊕Fj with
F_j = (1/n) (s_j θ_j − y_j),   j = 0, . . . , n − 1,    (28)
where s_j denotes the sum of all elements in the vector y_j for each j. Note that if each y_j is given as a probability vector, then s_j = 1. Moreover, H_L is expressed as H_L = ⊕H_j with
H_j = (s_j/n) (diag(θ_j) − θ_j ⊗ θ_j),    (29)
where ⊗ denotes the outer product. In both cases, the loss functions take the minimum value 0 if and only if F = 0.
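A small numpy sketch (ours) of the quantities in (28) and (29) for a single example j follows; it can be useful when implementing the G-vector products of Section 4.

# F_j and H_j of equations (28)-(29) for one example, in numpy (illustration only).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n = 1                                   # single example, so the 1/n factor is 1
phi_j = np.array([1.0, -0.5, 0.2])      # logits for one example
y_j = np.array([0.0, 1.0, 0.0])         # one-hot label, so s_j = y_j.sum() = 1
theta_j = softmax(phi_j)
s_j = y_j.sum()

F_j = (s_j * theta_j - y_j) / n                                   # (28)
H_j = s_j / n * (np.diag(theta_j) - np.outer(theta_j, theta_j))   # (29)

# H_j is positive semi-definite; F_j is the gradient of the cross-entropy
# -(1/n) * y_j @ log(theta_j) with respect to the logits phi_j.
assert np.all(np.linalg.eigvalsh(H_j) > -1e-12)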
6 EXPERIMENTS
In this study, we conducted the following three groups of experiments. First, we examined the optimization performance of the EDF-based methods on a data-fitting problem with a simple convolutional neural network model. Second, we tested both optimization and generalization performance in standard settings of classification tasks, in which we employed residual network models (Resnet, He et al. (2016)) and the CIFAR-10/100 datasets. Third, we incorporated some data augmentation and regularization techniques into the training, as we would like to measure the effectiveness of our methods in more practical settings.
The primary purpose of the experiments in this paper is not to pursue the state-of-the-art performance, but to explore how the statistical results pertain to those of optimization when the loss functions are minimized. Therefore, we tuned hyperparameters of each method in accordance with the optimization benchmark.
It should be noted that the conjecture concerning non-convex optimization problems in the field of deep learning (Baldi & Hornik (1989), Choromanska et al. (2015)) is still an open problem (studied for linear cases in Baldi & Lu (2012), Kawaguchi (2016)). Hence, for experiments in this paper, we do not discuss whether each optimization method actually reaches to a global solution.
6.1 OPTIMIZATION PERFORMANCE ON A STANDARD DATA-FITTING PROBLEM
We evaluated the convergence performance of EDF-based methods (type G and type H) on a data-fitting problem on CIFAR-10, that is, full-batch training in the context of deep learning. The model we employed in these experiments was a convolutional neural network that consisted of two convolution filters with rectified linear units and max pooling layers, followed by two fully connected layers. For EDF-based methods, the step size was fixed to 1.0, and no damping factors were used. In addition, the acceleration technique derived from W2 of (25) was adopted, since W2 achieved better performance than W1. A similar experiment with a different type of second-order method was conducted in Sohl-Dickstein et al. (2014).
First, we examined the change in convergence performance of EDF-G depending on the hyperparameter k, the order of the Krylov subspace. The results are illustrated in the left-hand side of Figure 1. The presented trend is consistent with the theoretical fact that k interpolates between the gradient method (for small k) and the method based on dynamics (18) (for large k). In other words, when k is small, the method behaves like a gradient method, which converges slowly, but as k becomes larger, the method leads to a better approximation of the inverse matrix, which gives a rapidly decaying flow.
Next, we compared EDF-G (k = 1, 30) with EDF-H and other standard optimizers in deep learning: gradient descent methods with Polyak's momentum scheme (Momentum) and Nesterov's acceleration scheme (NAG), and Adam. The step sizes for Momentum, NAG, and Adam were fixed to 0.01, 0.001, and 0.001, respectively.
The right-hand side of Figure 1 shows that both EDF-based methods made the loss decrease more rapidly than other standard optimizers. Even when EDF-based methods were reduced to the gradient methods at k = 1, EDF-G outperformed standard optimizers.
The figure also presents the difference between EDF-G and EDF-H. While their convergence rates around extremes were almost the same, the overall convergence rates of EDF-G were better than that of EDF-H.
As has been found, second-order-methods on full-batch training converge to the solution within a few iterations. However, with such a training, the quality of generalization becomes worse compared to the time when stochastic approach with a mini-batch of small size is taken. Like other secondorder-methods, EDF also suffers from a negative impact of full-batch training on generalization performance, though setting the hyperparameter k small makes EDF be compatible with stochastic approaches and take their advantages (see Appendix B).
6.2 PERFORMANCES ON CLASSIFICATION TASKS IN DEEP LEARNING
In the following experiments, we compared both the optimization and generalization performance of EDF-G with other methods, the momentum stochastic gradient descent method (MSGD) and Adam, on classification tasks for CIFAR-10/100. The experiments were conducted employing residual network models with batch normalization (Resnet-56 for CIFAR-10 and Resnet-110 for CIFAR-100), and working with a mini-batch of size 250.
For CIFAR-100, during pre-investigation of the dataset, we found that 9 pairs of data, each of which has the same image but different labels, contaminated the neural network and might have had a non-negligible influence on both optimization and generalization performance (See Appendix C). Therefore, in our experiments on CIFAR-100, we excluded them from the training dataset and used the rest of the data (size = 49982).
At the end of each epoch, we computed training loss in the following manner. First, the loss was defined as (27) where n is the total number of data. Second, to calculate the statistics for batch normalization in the loss, as described in Ioffe & Szegedy (2015), we adopted so-called ”inference mode” in which moving averages of mean and variance over mini-batches are used for batch-normalization.
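A generic PyTorch-style sketch (ours, not the authors' code) of this "inference mode" loss computation is shown below; in eval mode, batch normalization uses its moving averages rather than per-batch statistics.

# Full training loss with batch norm in inference mode (generic sketch).
import torch

def full_training_loss(model, loader, criterion):
    model.eval()                      # batch norm switches to moving averages
    total, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += criterion(model(x), y).item() * x.size(0)
            count += x.size(0)
    return total / count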
For EDF, we tested the case k = 1 and k = 2, with the damping factor set as a = b = 1. The momentum coefficient was fixed to α = 0.9 for the W1 acceleration (see 25).
For each of the tasks, we ran tests on 10 different learning rates; for EDF between 0.1 and 2.0, for Momentum SGD between 0.1 and 10.0, for Adam between 0.0001 and 0.1. We then chose the one with the best convergence performance for each optimizer. For EDF with W2, to obtain stability in convergence, the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. Such a change in learning rate in the middle of optimization did not bring advantages to either optimization or generalization performances for other methods including EDF with W1 (see Appendix D).
The results are presented in Figures 2 and 3. As shown in the figures, with respect to the optimization performance, EDF reached an optimal solution with smaller error at higher convergence rate than Adam and Momentum SGD, even when k = 1, 2. Moreover, EDF overall attained better generalization performance than other methods.
6.3 PERFORMANCES WITH TECHNIQUES IN A MORE PRACTICAL SETTING
For classification tasks, generalization performance often improves by adopting several techniques, such as data augmentation and regularization. In this group of experiments, employing the data augmentation based on random horizontal flip and shift and the L2 regularization, we conducted comparisons between EDF and other methods that are similar to those in the previous sections.
When adopting these techniques, the rate decay scheme has as large an impact on optimization performance as the learning rate itself. Because the effectiveness of each rate decay scheme depends heavily on the optimization method, it is almost impossible to find the best scheme that is applicable to all the methods. For this reason, for MSGD and Adam, we chose standard initial learning rates of 0.1 and 0.001, respectively, and at the ends of the 100-th, 140-th, and 180-th epochs, reset the rates to 0.1 times the rates at that moment. For EDF, we ran tests on two different rates, 0.5 and 0.75, the optimal rates found in the experiments of Section 6.2, and reset them to 0.2 times the initial values only at the end of the 100-th epoch, in order to demonstrate the performance of EDF more clearly. Among the results obtained with EDF, we chose the ones with the best optimization performance.
The result of the comparison is presented in Figures 4 and 5. As can be seen, we found a condition in which EDF achieved better performance than other methods on optimization while achieving sufficient levels of generalization.
7 CONCLUSION
Obtaining good statistical results from limited available data is a critical goal in machine learning. To reach this goal, while developing an effective model is an essential approach, eliciting the best performance from a fixed model through optimization is important as well. In our study, to examine the performance of our optimization methods, the Exponentially Decaying Flows (EDF) based methods, we explored their generalization properties pertaining to the results of optimization. Our experiments showed that EDF-based methods are more likely than other standard optimizers to reach optimal solutions that generalize well to test data. Therefore, EDF-based methods are optimization methods with high potential for application to various tasks, and are worth refining in future studies.
A ALGORITHMS OF EDF-BASED METHODS
In terms of computation of the EDF-based methods on a GPU, the Jacobian-vector product can be carried out at almost the same cost as the gradient of the loss function. In fact, multiplying a vector by the Jacobian and by its transposed matrix (written as R-op and L-op, respectively) is implemented in combination with gradients of scalar functions. For the pseudo-code of the update scheme of EDF-G with L/R-op, refer to Algorithm 1, and to Algorithm 2 for the particular case that k = 1.
Algorithm 1: Update scheme for EDF-G with non-preconditioned MINRES
Input: w_i, k, a, b, η.  Output: w_{i+1}.
  Compute g = ∇L(ϕ)
  Set s = ‖g‖, λ = a s^b
  Initialize parameters for MINRES as β = s, ξ = β, γ0 = γ1 = 1, σ0 = σ1 = 0, v = v0 = 0, v1 = g, q0 = q1 = 0
  for j = 1 to k do
    v1 = v1 / β
    Compute u = J_F v1 using R-op
    Compute u = J_ϕ^T u using L-op
    u = u + λ g
    α = ⟨v1, u⟩
    u = u − α v1 − β v0
    v0 = v1, v1 = u
    δ = γ1 α − γ0 σ1 β,  ρ2 = σ1 α + γ0 γ1 β,  ρ3 = σ0 β
    β = ‖v1‖
    ρ1 = √(δ² + β²)
    γ0 = γ1, γ1 = δ/ρ1, σ0 = σ1, σ1 = β/ρ1
    u = q0, q0 = q1, q1 = (1/ρ1)(v0 − ρ2 q0 − ρ3 u)
    v = v + γ1 ξ q1
    ξ = −σ1 ξ
  end for
  w_{i+1} = w_i − η v
Algorithm 2: Update scheme for k = 1
Input: w_i, a, b, η.  Output: w_{i+1}.
  Compute g = ∇L(ϕ)
  Set s = ‖g‖, λ = a s^b
  Compute u = J_F g using R-op
  Compute u = J_ϕ^T u using L-op
  u = u + λ g
  c = ⟨g, u⟩ / ⟨u, u⟩,  v = c g
  w_{i+1} = w_i − η v
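A compact PyTorch rendering (ours) of Algorithm 2 is given below for the least-squares loss (14), where F = ϕ and J_F = J_ϕ, so the R-op and L-op become the Jacobian-vector and vector-Jacobian products of ϕ; the jvp/vjp helpers from torch.autograd.functional are our choice of implementation, not the authors'.

# Algorithm 2 for the least-squares loss (14), where F = phi (sketch).
import torch
from torch.autograd.functional import jvp, vjp

def edf_k1_step(phi, w, a=1.0, b=1.0, eta=1.0):
    _, g = vjp(phi, w, phi(w).detach())   # g = grad L-hat = J_phi^T phi(w)  (L-op)
    s = g.norm()
    lam = a * s ** b                      # damping lambda = a * s^b
    _, u = jvp(phi, w, g)                 # R-op: u = J_F g = J_phi g
    _, u = vjp(phi, w, u)                 # L-op: u = J_phi^T u
    u = u + lam * g
    c = torch.dot(g, u) / torch.dot(u, u)
    return w - eta * c * g

# toy usage: phi(w) = residuals of a small non-linear fit
w = torch.zeros(2)
phi = lambda w: torch.stack([w[0] ** 2 + w[1] - 1.0, w[0] - 0.5])
for _ in range(30):
    w = edf_k1_step(phi, w)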
B COMPARISON BETWEEN FULL-BATCH AND STOCHASTIC TRAININGS
Figure 6 shows the results of experiments using EDF for simple examples that compare a full-batch training with stochastic trainings. In this example, the convolutional network similar to that used in Section 6.1 was employed on MNIST. The curve labeled as ”EDF F” depicts the result of Full-batch training per step, and those labeled as ”EDF S” illustrate the results of stochastic trainings per epoch with a mini-batch of size 500.
Let us divide the dataset {x, y} = {x_j, y_j}_{j=0}^{n−1} into distinct subsets of size p such that
{x, y} = ∪_{i=1}^{l} {x^(i), y^(i)},   {x^(i), y^(i)} = {x_j, y_j}_{j=p(i−1)}^{pi−1},   pl = n.    (30)
Then, gradients and Jacobian-vector multiplications are calculated as
∇L(ϕ) = Σ_i ∇L(ϕ^(i)),   Gv = Σ_i J_{ϕ^(i)}^T J_{F^(i)} v,    (31)
where ϕ(i) and F (i) are the subcomponents of ϕ and F , respectively, corresponding to the decomposition above. Thus, for a fixed k, if we set p as the same size as a mini-batch, then the computational cost per step of a full-batch training becomes almost the same as that per epoch of a mini-batch training.
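The bookkeeping of (31) can be sketched as follows (ours, written for the least-squares case F = ϕ with explicit per-chunk Jacobians, purely for illustration):

# Accumulate the gradient and G @ v over chunks of size p, as in (31).
import numpy as np

def grad_and_Gv(w, v, chunks):
    # chunks is a list of (phi_i, jac_i) callables, one pair per subset of the data
    grad = np.zeros_like(w)
    Gv = np.zeros_like(w)
    for phi_i, jac_i in chunks:
        J_i = jac_i(w)                  # Jacobian of the i-th chunk of phi
        grad += J_i.T @ phi_i(w)        # sum_i grad L(phi^(i))
        Gv += J_i.T @ (J_i @ v)         # sum_i J_phi(i)^T J_F(i) v   (F = phi here)
    return grad, Gv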
C IRREGULAR DATA IN CIFAR-100
The dataset of CIFAR-100 includes 9 pairs of irregular data, each of which has the same image but different labels. These data are enumerated in Figure 7. For instance, the 8393-rd and the 36874-th images are the same, but they have different labels, ”girl (class 35)” and ”baby (class 2).”
Such pairs contaminate the training process. In fact, in our experiment, when the network model was optimized with the full dataset, one of the images in each 9 pairs above could not be classified correctly, which resulted in stagnated training accuracy at 99.982 %. Moreover, generalization also deteriorated when irregular data were contained. For the details of these results, see Figure 8.
D PERFORMANCES WITH THE RATE DECAY SCHEME
Figure 9 and Figure 10 present the optimization and generalization performance of each optimizer when the rate decay scheme was adopted to the experiment with the same setting as in Figure 2 and Figure 3. The rate decay scheme was that the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. | 1. What are the reviewer's main concerns regarding the paper's clarity and accuracy?
2. Which mathematical concepts used in the paper do the reviewer find confusing or unfamiliar to most ICLR audience?
3. Does the reviewer have any questions or concerns regarding the paper's theoretical results and their intuition?
4. How does the reviewer assess the realism and practicality of the assumptions made in the paper, such as the absence of stationary points except for a minimum?
5. What are the reviewer's comments and questions regarding the paper's notation usage, particularly in Section 5?
6. How does the reviewer evaluate the effectiveness and efficiency of the proposed method compared to other optimization methods, especially in terms of computational cost? | Review | Review
I found the paper poorly written, with many typos and incorrect formulations. It contains several sophisticated mathematical concepts (probably from differential geometry, dynamical systems and differential equations) that I believe a majority of the ICLR audience would not be so familiar with; and these notions are not defined nor explained (e.g. Riemannian metrics/manifold, Krylov subspaces). Section 2 presents some theoretical results and a formal definition of exponentially decaying flows but I could not see the intuition or the goal of these results. In Section 3, for the main
optimization problem (13), it is assumed that "there exists no stationary point except for a minima." and so, I understand that there is only "a minima" or at least that all the local minima are assumed to have the same value, which not realistic for deep learning.
Comments and questions
Section 1:
After speaking of gradient descent methods, the authors say "Another class of methods... is adaptive methods such as AdaGrad and Adam": These are all based on stochastic gradient descent algorithm, so this distinction is surprising.
Section 2:
"Jacobian with variable w": does not make sense (although I can guess the author mean w.r.t.)
"J is regular and the equation has a unique solution": what is regular here? No equation has been mentioned before
"...becomes a Riemannian manifold under some appropriate conditions": which conditions?
Section 3:
"Applying Theorem 2.2, we obtain the differential equation": I can't see how applying Theorem 2.2 leads to a differential equation.
Section 4:
"we consider a projection which maps r to a vector P_k(A, v) in the k-th order Krylov subspace such that r = P_\infty(A, v)": I don't understand.
"Particularly, in the case that χ = −1, we set λ = akJ φ T F k b with a, b > 0, so that the convergence rate of (24) stays exponential": why is the case χ = −1 relevant to point out?
"Finally, to accelerate the EDF-based methods, it is sometimes effective to change equation (18) into a second-order differential equation": is it an empirical observation? or is there an explanation?
Section 5:
Again, so many notations (not all conventional: e.g. \theta denotes the softmax function), just to present a standard loss for a deep learning model.
Section 6:
"As has been found, second-order-methods on full-batch training converge to the solution within a few iterations.": Reference?
The method is compared to Nesterov accelerated gradient (NAG) for the data-fitting problem (fig 1) but not on the other tasks (figs 2 to 5).
In the end, it was still unclear to me what the training consists of with the introduced exponentially decaying flows. I understand that a differential equation was formulated but then how is it solved iteratively? What is the cost of solving it? In Section 6, different algorithms are compared in terms of number of steps. What about the computational cost? |
ICLR | Title
Exponentially Decaying Flows for Optimization in Deep Learning
Abstract
The field of deep learning has been craving an optimization method that shows outstanding properties for both optimization and generalization. We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function. In our method, the flows are referred to as Exponentially Decaying Flows (EDF), as they can be designed to converge to local solutions exponentially. In this paper, we conduct experiments to show their high performance on optimization benchmarks (i.e., convergence properties), as well as their potential for producing good machine learning benchmarks (i.e., generalization properties).
1 INTRODUCTION
Due to recent progress in the field of machine learning, it becomes more and more important to develop and sophisticate methods of solving hard optimization problems. At the same time, in this field, such methods are additionally required to elicit decent generalization performance from statistical models. An efficient method of mathematical optimization, however, does not always produce sufficient generalization properties, since these are involved with two distinct mathematical problems; The former is to find one of the solutions which minimize a given (possibly non-convex) objective function, and the latter is to adjust parameters so that a statistical estimator achieves its best result. To address such a hard issue, we introduce a new mathematical perspective on optimization, and develop a method for machine learning based on this perspective. We then empirically show its rapid convergence rate and high compatibility with deep learning techniques, as well as good statistical properties.
In this field, many optimization methods have been proposed and modified so that they fit specific problems or models. One of the current standard methods is the gradient descent method. The method tends to converge slowly in general optimization problems. However, with various specific techniques, such as mini-batch training and batch normalization (Ioffe & Szegedy (2015)), it has been found to be efficient for state-of-the-art purposes in the field of deep learning. Another class of methods that are now becoming popular and standard is adaptive methods, such as AdaGrad (Duchi et al. (2011)) and Adam (Kingma & Ba (2015)). Compared to the gradient descent method, these methods have been shown to improve convergence rates with almost the same computational cost as the gradient descent method, but are reported to result in poor statistical outcomes in some cases of machine learning (Wilson et al. (2017)).
Other class of methods that have been thoroughly studied in the theory of mathematical optimization is second-order methods, such as the Newton method and the Gauss-Newton method. These methods possess great convergence properties, and in particular, have a potential to overcome plateau’s problems (Dauphin et al. (2014)). Furthermore, when it comes to applications in stochastic settings, the method based on the Gauss-Newton Matrix (or Fisher information Matrix) is shown to asymptotically attain the best statistical result, which is called Fisher efficiency (see Amari (1998)). Despite these attractive characteristics, the methods have not yet been spotlighted in the field of machine learning due to several severe drawbacks; They suffer from high computational cost in general and their useful properties are no longer guaranteed in practical settings (see Section 12 in Martens (2014)). One of the continuously developing second-order methods in this field, K-FAC ( Ba et al. (2017), Grosse & Martens (2016) ), successfully produced high convergence rate empirically with relatively low computational cost. However, it still requires much effort to become compatible with
some deep learning techniques. In addition, it is unclear whether the method has advantages in generalization performance.
In our approach, by introducing a Riemannian metric induced by non-linear functions, we constitute dynamical systems which describe motions along the shortest route from arbitrary initial points to the zeros of non-linear functions on the corresponding Riemannian manifold, that is, geodesic with respect to the Riemannian metric. One of the remarkable characteristics of our approach is that it enables us to flexibly design flows of such dynamical systems to control convergence rates. The results for the flows are then applicable to mathematical optimization problems, in particular, with deep neural network (DNN) models. In this paper, after providing mathematical ground of our methods, we experimentally demonstrate their performance in various aspects, from convergence rates to statistical properties.
2 GEODESIC FLOWS FOR NON-LINEAR EQUATIONS
We start by establishing some essential properties of dynamics which are effective for the analysis of non-linear equations. Let F : RN → RN be a smooth function and J be the Jacobian with variable w ∈ RN , that is, J = ∂F/∂w. In this section, we deal with the well-posed case that there exists a connected closed subset Ω ⊂ RN , where J is regular and the equation has a unique solution ξ. Therefore, the positive matrix G = JTJ induces a Riemannian metric g on Ω and (Ω, g) becomes a Riemannian manifold under some appropriate conditions. Let us then consider time evolution of variable w on this manifold. Our main purpose in this section is to study the characteristics of dynamical systems in which w(t) moves on geodesics between any point in Ω and ξ with respect to the metric g.
Let L be a Lagrangian given by
L (w, v) = 1
2 vTG(w)v (1)
with v = dw/dt (also written as ẇ). The Euler-Lagrange equation for L is then expressed as
dp dt −∇wL = 0 (2)
with momentum vector p = Gv. If the boundary condition at two points in Ω, w(t0) = w0, w(t1) = w1, is imposed on (2), a geodesic between w0 and w1 is obtained as the solution. In contrast, if we give an appropriate initial condition, w describes the motion along the geodesic from w0 to w1. In fact, the following statement holds; Theorem 2.1. Let w0 be an arbitrary point in Ω and (w(t), p(t)) be a solution of equation (2) with the following initial condition;
w(0) = w0, p(0) = −J(w0)TF (w0). (3) Then w satisfies
F ( w(t) ) = (1− t)F ( w0 ) , (4)
for t ∈ [0, 1]. In particular, w(t) passes through the point which is a solution of non-linear equation F (w) = 0 at t = 1, that is, ξ = w(1).
We briefly describe the outline of the proof for the statement above. Throughout this paper, we regard functions of w as those of t in the trivial way; for instance, F (t) = F (w(t)). Note that p can be expressed as p = JT dF/dt. Then using the Beltrami identity for (2) with the initial condition above leads to the equation
d dt F = −F0, (5)
where F0 = F (0). Thus, a closed form expression is obtained as
F (t) = (1− t)F0, (6) which gives F (1) = 0 as asserted in Theorem 2.1.
Now we take a different expression that the coefficient (1 − t) in (6) is replaced by a different monotonically decreasing function, that is,
F (t) = ρ(t)F0, (7)
where ρ denotes a monotonically decreasing smooth function from (t0, t1) onto (0, 1). Then, we give the following differential equation whose solution is of the closed form (7);
d dt F = χ · F, χ(t) = d dt ln(ρ(t)). (8)
A motion described by this differential equation differs from the one that is described by (2), but these two motions are along the same geodesic. Theorem 2.2. Let w0 ∈ Ω be an arbitrary point. The differential equation
J dw
dt = χ · F, t ∈ (t0, t1) (9)
with an initial condition w0 = w(t0) has a unique solution that satisfies F (w(t1)) = 0. Furthermore, the orbit under flow f defined by f(w0, t) = w(t) coincides with that of the geodesic equation (2).
Note that since equation (9) is equivalent to equation (8), the orbit is invariant under coordinate transformations.
With respect to the choice of ρ, the end point t1 can be set as∞ under some appropriate conditions and Theorem 2.2 still holds in the sense that
lim t→∞ F (w(t)) = 0. (10)
In particular, if we set ρ(t) = e−t, then χ(t) = −1 and F can be represented as F (t) = e−tF0, so that the convergence rate of F is exponential. Definition 2.3. Let w be a solution of the differential equation
J dw
dt = −F, t ∈ (t0, t1) (11)
with an initial condition w0 = w(t0). The flow f(w0, t) = w(t) is called an exponentially decaying flow (EDF) of non-linear equation F (w) = 0.
For the end of this section, we present a connection between EDF and the classical Newton method. If we apply the Euler method to the differential equation (9) with step size 4t, the corresponding iteration step can be written as
wi+1 = wi +4t · χ(τi) · J(wi)−1F (wi), τi+1 = τi +4t, (12) which recovers the Newton method with step size ηi = 4t · χ(τi).
3 APPLICATION TO MATHEMATICAL OPTIMIZATIONS
Consider smooth functions ϕ : RN → RM and L : RM → R. In this section, we develop a method based on EDF for a mathematical optimization problem of the form
min w∈Ω L(ϕ(w)). (13)
In this section, the area Ω ⊂ RN is supposed to be a compact subset where there exists no stationary point except for a minima.
Let F : RN → RM denote derivative ∂L/∂ϕ. An example of such problems is least squares one in which L is given by
L(ϕ) = 1
2 ‖ϕ‖2. (14)
In this case, F = ϕ. In particular, if M = N and a minimal value of loss function L̂ = L ◦ ϕ is zero, the optimization problem is equivalent to solving non-linear equation F (w) = 0.
For optimization problems, we set up a standard equation that the gradient of loss function is zero, that is, ∇L̂(w) = 0. Note that the gradient can be expressed as ∇L̂(w) = JTϕ F (w) with Jacobian Jϕ = ∂ϕ/∂w. Applying Theorem 2.2, we obtain the differential equation
H dw
dt = χ · JTϕ F, (15)
where H is the Hessian of loss function L̂ with respect to w. Since second order differentiation is typically computationally heavy, we seek other equations which possess almost the same property as (15) especially in terms of asymptotic behavior.
Let us decompose momentum Hẇ as
H dw
dt =
d
dt
( JTϕ F ) =
( d
dt Jϕ
)T F +G dw
dt , (16)
where JF denotes the Jacobian of F , and G is a symmetric matrix defined by
G = JTϕ JF = J T ϕHLJϕ (17)
with Hessian matrix HL of L with respect to ϕ. We then consider the following equation instead of (15);
G dw
dt = χ · JTϕ F. (18)
This equation no longer describes the motion along the geodesic related to (15) in general. However, if M = N and Jϕ is invertible, then w moves on another geodesic with respect to a different metric GF = J T F JF . In addition, if ρ(t) = e
−t, F converges exponentially, which implies that∇L̂ = JϕF also converges exponentially in (18) as well as in (15). In general cases, if a condition that
d
dt
( 1
2 ‖F‖2
) = χ〈JFG−1JTϕ F, F 〉 ≤ χ‖F‖2 (19)
is satisfied, then F converges to 0. This shows that in the neighborhood of solution ξ of equation ∇L̂ = 0, the momentum Gẇ sufficiently approximates Hẇ by (16). Definition 3.1. The flow given by
H dw
dt = −JTϕ F (20)
is referred to EDF of type H and written as EDF-H. Similarly, the flow given by
G dw
dt = −JTϕ F (21)
is referred to EDF of type G and written as EDF-G.
4 MODIFICATION SCHEMES FOR EDF-BASED METHODS
Like second order methods, in EDF-based methods, matrix inversion has to be carried out, which requires expensive computational cost, particularly in large systems. Moreover, we often encounter rank deficient matrices in optimization problems. To deal with rank deficiency, in general, we need pseudo-inverse matrices which are more computationally heavy. Therefore, instead of the inverse of matrix A = G,H , for fixed v ∈ RM , we consider a projection which maps r = arg minx∈RM ‖Ax − v‖ to a vector Pk(A, v) in the k-th order Krylov subspace Kk(A, v) = span{v,Av,A2v, . . . , Ak−1v} such that r = P∞(A, v). One of the basic methods to construct such a projection for indefinite symmetric systems is the minimum residual method (Paige & Saunders (1975)), which requires only the matrix multiplication. Therefore, for numerical computations, we use the following differential equation approximated by the k-th order Krylov subspace;
dw dt = χ · Pk(A, JϕF ), A = G,H. (22)
k is a hyperparameter that interpolates between the gradient method and the method based on (15) or (18). In fact, in the case that k = 1, equation (22) has the form
dw dt = c · ∇L̂, c = χ 〈u,∇L̂〉 〈u, u〉 , u = A∇L̂, (23)
which reduces to a kind of gradient methods, but the coefficient c conveys information about G or H unlike the standard gradient methods.
Next, similar to the Levenberg-Marquardt algorithm, we modify equation (18) by adding a damping factor to G in order that the method becomes more stable. So, we set
(G+ λI) dw
dt = χ · JTϕ F, (24)
where λ is a continuous positive function of t. Then, we take the same approximation as (22) with A = G+λI . The damping factor λ plays several important roles in solving practical problems. First, it has solutions well-behaved near singularities. Even in the case that G is not invertible, equation (24) can be defined and solved unlike (18). If we choose λ such that it approaches to 0 as rapidly as the gradient in (18), the asymptotic behavior of (24) is almost the same as that of (18). Particularly, in the case that χ = −1, we set λ = a‖JTϕ F‖b with a, b > 0, so that the convergence rate of (24) stays exponential. Second, the damping factor makes the method compatible with stochastic approaches such as mini-batch training, in deep learning models. Since the orbit of (24) is affected by the gradient due to the damping factor λ, the method based on this equation could take advantage of stochastic approaches as other gradient methods do. (For implementation of the algorithm, see Appendix A.)
Finally, to accelerate the EDF-based methods, it is sometimes effective to change equation (18) into a second-order differential equation, particularly in the case in which the approximation method with k is applied. Specifically, we take the equation
G d2w dt2 + κ ·Gdw dt + JTϕ F = 0, (25)
where κ is a real-valued function of t. The convergence properties of the methods are controlled by the following differential equation;
d2F
dt2 + κ · dF dt + F = 0. (26)
There are two ways of determining κ. The first one is to set κ = αwith constant α (W1), which leads to a similar scheme to the momentum gradient decent method. The other one is to set κ(t) = αt−1 (W2). In this setting, the equation can be discretized in a similar way as described in Su et al. (2014), which is analogous to the Nesterov’s acceleration scheme.
5 OPTIMIZATION PROBLEMS IN THE FIELD OF DEEP LEARNING
In this section, we present how optimization problems are set up in the field of deep learning. Let x = {xj}n−1j=0 and y = {yj} n−1 j=0 denote training datasets of input and output, respectively, where n is the size of dataset. Let dx and dy be dimensions of input data and output data, respectively. We write ϕnn for neural networks with parameters w ∈ RN , and define ϕ by the direct sum of vectors {ϕj} given by ϕj(w) = ϕnn(xj , w), that is, ϕ = ⊕ϕj . Note that M = n × dy in this case. Then finding a minima of a given loss function is proposed as a standard optimization problem to train networks.
For the image classification tasks, there are two typical cases of setting loss functions. In the first case, the loss is set as (14). As already mentioned, in this case, F = ϕ and HL = I . In the second case, the loss is given by cross entropy with softmax function, that is,
L(ϕ) = 1
n n−1∑ j=0 yj · ln (θj) , (27)
where θ denotes the softmax function and θj = θ(ϕj). In this case, F is expressed by the direct sum such that F = ⊕Fj with
Fj = 1
n (sjθj − yj) , j = 0, . . . , n− 1, (28)
where sj denotes a sum of all elements in vector yj for each j. Note that if each yj is given as a probability vector, then sj = 1. Moreover, HL is expressed as HL = ⊕Hj with
Hj = sj n (diag(θj)− θj ⊗ θj) , (29)
where ⊗ denotes the outer product. In both cases, the loss functions take the minimum value 0 if and only if F = 0.
6 EXPERIMENTS
In this study, we conducted following three groups of experiments; First, we examined optimization performance of the EDF-based methods on a data-fitting problem with a simple convolutional neural network model. Second, we tested both optimization and generalization performance in standard settings of classification tasks in which we employed residual network models (Resnet He et al. (2016)) and CIFAR-10/100 datasets. Third, we incorporated some techniques of data augmentation and regularization in the training into our experiment as we would like to measure effectiveness of our methods in more practical settings.
The primary purpose of the experiments in this paper is not to pursue the state-of-the-art performance, but to explore how the statistical results pertain to those of optimization when the loss functions are minimized. Therefore, we tuned hyperparameters of each method in accordance with the optimization benchmark.
It should be noted that the conjecture concerning non-convex optimization problems in the field of deep learning (Baldi & Hornik (1989), Choromanska et al. (2015)) is still an open problem (studied for linear cases in Baldi & Lu (2012), Kawaguchi (2016)). Hence, for experiments in this paper, we do not discuss whether each optimization method actually reaches to a global solution.
6.1 OPTIMIZATION PERFORMANCE ON A STANDARD DATA-FITTING PROBLEM
We evaluated convergence performance of EDF-based methods (type G and type H) on a data-fitting problem of CIFAR-10, that is, full-batch training in the context of deep learning. The model we employed in these experiments was the convolutional neural network that consisted of two convolution filters with rectified linear units and max pooling layers, followed by two fully connected layers. For EDF-based methods, the step size was fixed to 1.0, and no damping factors were used. In addition, the acceleration technique derived from W2 of (25) was adapted, since W2 achieved better performance than W1. A similar experiment with a different type of second-order methods was conducted in Sohl-Dickstein et al. (2014).
First, we examined change in convergence performance of EDF-G depending on hyperparameter k, the order of Krylov subspace. The results are illustrated in the left-hand side of Figure 1. The presented trend is consistent with the theoretical fact that k interpolates between the gradient method (for small k) and the method based on dynamics (18) (for large k). In other words, when k is small, the method is similar to a gradient method, which converges slow, but as k becomes larger, the method leads to better approximation of the inverse matrix, which gives a rapidly decaying flow.
Next, we compared EDF-G (k = 1, 30) with EDF-H and other standard optimizers in deep learning: gradient descent methods with Polyaks momentum scheme (Momentum) and Nesterov’s acceleration scheme (NAG), and Adam. The step sizes for Momentum, NAG, and Adam were fixed to 0.01, 0.001, and 0.001, respectively.
The right-hand side of Figure 1 shows that both EDF-based methods made the loss decrease more rapidly than other standard optimizers. Even when EDF-based methods were reduced to the gradient methods at k = 1, EDF-G outperformed standard optimizers.
The figure also presents the difference between EDF-G and EDF-H. While their convergence rates around extremes were almost the same, the overall convergence rates of EDF-G were better than that of EDF-H.
As has been found, second-order-methods on full-batch training converge to the solution within a few iterations. However, with such a training, the quality of generalization becomes worse compared to the time when stochastic approach with a mini-batch of small size is taken. Like other secondorder-methods, EDF also suffers from a negative impact of full-batch training on generalization performance, though setting the hyperparameter k small makes EDF be compatible with stochastic approaches and take their advantages (see Appendix B).
6.2 PERFORMANCES ON CLASSIFICATION TASKS IN DEEP LEARNING
In the following experiments, we compared both optimization and generalization performance of EDF-G with other methods, momentum stochastic gradient decent method (MSGD) and Adam on classification tasks for CIFAR-10/100. The experiments were conducted, employing residual network models with batch normalization (Resnet-56 for CIFAR-10 and Resnet-110 for CIFAR-100), and working with a mini-batch of size 250.
For CIFAR-100, during pre-investigation of the dataset, we found that 9 pairs of data, each of which has the same image but different labels, contaminated the neural network and might have had a non-negligible influence on both optimization and generalization performance (See Appendix C). Therefore, in our experiments on CIFAR-100, we excluded them from the training dataset and used the rest of the data (size = 49982).
At the end of each epoch, we computed training loss in the following manner. First, the loss was defined as (27) where n is the total number of data. Second, to calculate the statistics for batch normalization in the loss, as described in Ioffe & Szegedy (2015), we adopted so-called ”inference mode” in which moving averages of mean and variance over mini-batches are used for batch-normalization.
For EDF, we tested the case k = 1 and k = 2, with the damping factor set as a = b = 1. The momentum coefficient was fixed to α = 0.9 for the W1 acceleration (see 25).
For each of the tasks, we ran tests on 10 different learning rates; for EDF between 0.1 and 2.0, for Momentum SGD between 0.1 and 10.0, for Adam between 0.0001 and 0.1. We then chose the one with the best convergence performance for each optimizer. For EDF with W2, to obtain stability in convergence, the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. Such a change in learning rate in the middle of optimization did not bring advantages to either optimization or generalization performances for other methods including EDF with W1 (see Appendix D).
The results are presented in Figures 2 and 3. As shown in the figures, with respect to the optimization performance, EDF reached an optimal solution with smaller error at higher convergence rate than Adam and Momentum SGD, even when k = 1, 2. Moreover, EDF overall attained better generalization performance than other methods.
6.3 PERFORMANCES WITH TECHNIQUES IN A MORE PRACTICAL SETTING
For classification tasks, generalization performance often improves by adopting several techniques, such as data augmentation and regularization. In this group of experiments, employing the data augmentation based on random horizontal flip and shift and the L2 regularization, we conducted comparisons between EDF and other methods that are similar to those in the previous sections.
When adopting these techniques, rate decay scheme has as large impact on optimization performance as learning rate itself. Because effectiveness of each rate decay scheme much depends on optimization methods, it is almost impossible to find the best scheme that is applicable to all the methods. For this reason, for MSGD and Adam, as initial learning rates, we chose one of the standard rates for MSGD and Adam, 0.1 and 0.001, respectively, and at the ends of the 100-th, 140-th, 180-th epochs, reset the rates to 0.1 times the rates at the moment. For EDF, we ran tests on two different rates 0.5 and 0.75, the optimal rates found in the experiments of Section 6.2, and reset to 0.2 times those of the initial values, only at the end of the 100-th epoch, in order to demonstrate performance of the EDF more clearly. Among the results obtained with EDF, we chose the ones with the best optimization performance.
The result of the comparison is presented in Figures 4 and 5. As can be seen, we found a condition in which EDF achieved better performance than other methods on optimization while achieving sufficient levels of generalization.
7 CONCLUSION
Obtaining good statistical results from limited available data is a critical goal in machine learning. To reach this goal, while developing an effective model is an essential approach, eliciting the best performance from the fixed model through optimization is important as well. In our study, to examine the performance of our optimization methods, Exponentially Decaying Flows (EDF) based methods, we explored their generalization properties pertaining to results of optimization. Our experiments showed that EDF-based methods are more likely to achieve optimal solutions which generalize the test data well than other standard optimizers are. Therefore, EDF-based methods are considered to be optimization methods that have a high potential in their application to various tasks, and thus, are worthwhile to be sophisticated through future studies.
A ALGORITHMS OF EDF-BASED METHODS
In terms of computation of the EDF-based methods with GPU, the Jacobian-vector-product can be carried out at almost the same cost as the gradient of loss function. In fact, multiplying a vector by Jacobian and its transposed matrix (written as R-op and L-op, respectively) are implemented in combination with gradients of scholar functions. For the psuedo-code for update scheme of the EDF-G with L/R-op, refer to Algorithm 1, and Algorithm 2 in particular case that k = 1.
Algorithm 1: Update scheme for EDF-G with non-preconditioned MINRES Input : wi, k, a, b, η Output: wi+1
Compute g = ∇L(ϕ) Set s = ‖g‖, λ = asb Initialize parameters for MINRES as β = s, ξ = β, γ0 = γ1 = 1, σ0 = σ1 = 0, v = v0 = 0, v1 = g, q0 = q1 = 0 for j = 1 to k do v1 = v1/β Compute u = JF v1 using R-op Compute u = JTϕ u using L-op u = u+ λg α = 〈v1, u〉 u = u− αv1 − βv0 v0 = v1, v1 = u δ = γ1α− γ0σ1β, ρ2 = σ1α+ γ0γ1β, ρ3 = σ0β β = ‖v1‖ ρ1 = √ δ2 + β2
γ0 = γ1, γ1 = δ/ρ1, σ0 = σ1, σ1 = β/ρ1 u = q0, q0 = q1, q1 = (1/ρ1)(v0 − ρ2q0 − ρ3u) v = v + γ1ξq1 ξ = −σ1ξ
end for wi+1 = wi − ηv
Algorithm 2: Update scheme for k = 1 Input : wi, a, b, η Output: wi+1
Compute g = ∇L(ϕ) Set s = ‖g‖, λ = asb Compute u = JF g using R-op Compute u = JTϕ u using L-op u = u+ λg c = 〈g, u〉/〈u, u〉, v = cg wi+1 = wi − ηv
B COMPARISON BETWEEN FULL-BATCH AND STOCHASTIC TRAININGS
Figure 6 shows the results of experiments using EDF for simple examples that compare a full-batch training with stochastic trainings. In this example, the convolutional network similar to that used in Section 6.1 was employed on MNIST. The curve labeled as ”EDF F” depicts the result of Full-batch training per step, and those labeled as ”EDF S” illustrate the results of stochastic trainings per epoch with a mini-batch of size 500.
Let us divide the dataset {x, y} = {xj , yj}n−1j=0 into distinct subsets of size p such that
{x, y} = l⋃
i=1
{ x(i), y(i) } , { x(i), y(i) } = {xj , yj}pi−1j=p(i−1), pl = n. (30)
Then, gradients and Jacobian-vector-multiplications are calculated as ∇L(ϕ) = ∑ i ∇L ( ϕ(i) ) , Gv = ∑ i JTϕ(i)JF (i)v, (31)
where ϕ(i) and F (i) are the subcomponents of ϕ and F , respectively, corresponding to the decomposition above. Thus, for a fixed k, if we set p as the same size as a mini-batch, then the computational cost per step of a full-batch training becomes almost the same as that per epoch of a mini-batch training.
C IRREGULAR DATA IN CIFAR-100
The dataset of CIFAR-100 includes 9 pairs of irregular data, each of which has the same image but different labels. These data are enumerated in Figure 7. For instance, the 8393-rd and the 36874-th images are the same, but they have different labels, ”girl (class 35)” and ”baby (class 2).”
Such pairs contaminate the training process. In fact, in our experiment, when the network model was optimized with the full dataset, one of the images in each 9 pairs above could not be classified correctly, which resulted in stagnated training accuracy at 99.982 %. Moreover, generalization also deteriorated when irregular data were contained. For the details of these results, see Figure 8.
D PERFORMANCES WITH THE RATE DECAY SCHEME
Figure 9 and Figure 10 present the optimization and generalization performance of each optimizer when the rate decay scheme was adopted to the experiment with the same setting as in Figure 2 and Figure 3. The rate decay scheme was that the learning rate was set to 0.2 times that of the initial values at the end of the 20-th epoch. | 1. What are the issues with the paper's clarity and writing style?
2. What are the concerns regarding the contribution and novelty of the proposed method?
3. What are the errors in the paper's terminology and assumptions?
4. How can the authors improve the derivation and simplification of their approach?
5. What are the suggestions for improving the computation of the matrix and mentioning relevant literature?
6. How can the authors strengthen their convergence guarantees and discretization approach?
7. What are the questions regarding the method's ability to generalize to a stochastic setting?
8. What are the criticisms of the experimental design and hyperparameter choice? | Review | Review
The paper is overall unclear and not well-written. There are lots of typos, grammar mistakes, misuse of terminology,... that obscure the clarity of the paper. More importantly, the contribution is unclear to me, and I think the paper needs a major rewrite in order to comply with the type of standard we expect from an international publication in a machine learning venue. Below I give some comments that I hope will help the authors to revise their submission.
1) “In addition, it is unclear whether the method has advantages in generalization performance.“
There is some theoretical results that are worth citing such as
Bottou, L., & Bousquet, O. (2008). The tradeoffs of large scale learning. In Advances in neural information processing systems (pp. 161-168).
Under certain assumptions, second-order methods are known to generalize more poorly.
2) Page 2, “positive matrix G”
You mean positive-definite matrix since G is a covariance matrix. Positive and Positive-definite matrices are different concepts.
3) Assumption section 3
The authors make the assumption “there exists no stationary point except for a minima”. This is a rather strong assumption which of course does not apply do deep neural networks. The authors should discuss how to relax this assumption.
4) Derivation section 3
The authors drop the first term in the decomposition of the Hessian-momentum term without providing any justification. Why are you making this simplification? It is probably worth pointing out this resembles the Gauss-Newton approximation and you are left with a positive-definite approximation of the Hessian, therefore ignoring any negative curvature.
5) Computation of the matrix
“ One of the basic methods to construct such a projection... requires only the matrix multiplication”
I would also suggest mentioning that these methods usually construct sparse matrices such as tri-diagonal matrices, see e.g. Nocedal, J., & Wright, S. J. (2006). Numerical optimization 2nd.
6) Suggested approach
a) The whole derivation is rather unclear, the author seem to arrive to a second-order equation similar to the one derived for Nesterov accelerated gradient in Su et al. 2014. You should contrast how your equation differs from theirs.
b) What are the convergence guarantees for your approach? Consider deriving a proof of convergence as in Su et al. 2014 (at the very least I would expect an asymptotic result)
c) Second-order ODES are difficult to discretize (see discussion in Su et al.) and even if the continuous method is guaranteed to converge, the discretization procedure might not have such guarantees. You need to add a discussion about discretization and explain what approach you used in practice.
7) Deterministic vs Stochastic setting
The derivation presented in the main text assumes full gradients are computed but of course, in practice, one would only use mini-batches. The authors make some rather bold and unjustified claims regarding the ability of their method to generalize to a stochastic setting. In particular they claim “setting the hyperparameter k small makes EDF be compatible with stochastic
approaches and take their advantages”. This statement requires a solid justification. I do not believe this is true. If you add noise to your differential equation, you need to ensure the noise has certain properties (vanishing noise in the limit or bounded noise) if you want to guarantee convergence. I recommend the authors read the relevant literature, for instance:
Li, Q., Tai, C., et al. (2015). Stochastic modified equations and adaptive stochastic gradient algorithms. arXiv preprint arXiv:1511.06251.
Krichene, W., Bayen, A., and Bartlett, P. L. (2015). Accelerated mirror descent in continuous
and discrete time. In Advances in neural information processing systems, pages 2845–2853.
8) Experiments
Choice of hyper-parameters: the authors need to explain how they pick the hyper-parameters. Simply listing what values are used is not sufficient. You need to show you’ve tried different settings for the competing methods. You said
“The step sizes for Momentum, NAG, and Adam were fixed to 0.01, 0.001, and 0.001, respectively”. How did you pick these values? |
ICLR | Title
Learning to Efficiently Sample from Diffusion Probabilistic Models
Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful family of generative models, yielding high-fidelity samples and competitive log-likelihoods across a range of domains, including image and speech synthesis. Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models. However, DDPMs typically require hundreds-to-thousands of steps to generate a high fidelity sample, making them prohibitively expensive for high dimensional problems. Fortunately, DDPMs allow trading generation speed for sample quality through adjusting the number of refinement steps during inference. Prior work has been successful in improving generation speed through handcrafting the time schedule through trial and error. We instead view the selection of the inference time schedules as an optimization problem, and show that, with a simple dynamic programming algorithm, one can find the log-likelihood-optimal discrete time schedules for any pre-trained DDPM. Our method exploits the fact that the evidence lower bound (ELBO) can be decomposed into separate KL divergence terms, and given any computation budget, we discover the time schedule that maximizes the training ELBO exactly. Our method is efficient, has no hyper-parameters of its own, and can be applied to any pre-trained DDPM with no retraining. We discover inference time schedules requiring as few as 32 refinement steps, while sacrificing less than 0.1 bits per dimension compared to the default 4,000 steps used on an ImageNet 64x64 model.
1 INTRODUCTION
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful class of generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). DDPMs model the data distribution through an iterative denoising process, and have been applied successfully to a variety of applications, including unconditional image generation (Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021; Nichol & Dhariwal, 2021), shape generation (Cai et al., 2020), text-to-speech (Chen et al., 2021; Kong et al., 2020) and single image super-resolution (Saharia et al., 2021; Li et al., 2021).
DDPMs are easy to train, featuring a simple denoising objective (Ho et al., 2020) with noise schedules that successfully transfer across different models and datasets. This contrasts to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require an inner-outer loop optimization procedure that often entails instability and requires careful hyperparameter tuning. DDPMs also admit a simple non-autoregressive inference process; this contrasts to autoregressive models with often prohibitive computational costs on high dimensional data. The DDPM inference process starts with samples from the corresponding prior noise distribution (e.g., standard Gaussian), and iteratively denoises the samples under the fixed noise schedule. However, DDPMs often need hundreds-tothousands of denoising steps (each involving a feedforward pass of a large neural network) to achieve strong results. While this process is still much faster than autoregressive models, this is still often computationally prohibitive, especially when modeling high dimensional data.
There has been much recent work focused on improving the sampling speed of DDPMs. WaveGrad (Chen et al., 2021) introduced a manually crafted schedule requiring only 6 refinement steps; however, this schedule seems to be only applicable to the vocoding task where there is a very strong conditioning signal. Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020) accelerate sampling from
pre-trained DDPMs by relying on a family of non-Markovian processes. They accelerate the generative process through taking multiple steps in the diffusion process. However, DDIMs sacrifice the ability to compute log-likelihoods. Nichol & Dhariwal (2021) also explored the use of ancestral sampling with a subsequence of the original denoising steps, trying both a uniform stride and other hand-crafted strides. San-Roman et al. (2021) improve few-step sampling further by training a separate model after training a DDPM to estimate the level of noise, and modifying inference to dynamically adjust the noise schedule at every step to match the predicted noise level.
All these fast-sampling techniques rely on a key property of DDPMs – there is a decoupling between the training and inference schedule. The training schedule need not be the same as the inference schedule, e.g., a diffusion model trained to use 1000 steps may actually use only 10 steps during inference. This decoupling characteristic is typically not found in other generative models. In past work, the choice of inference schedule was often considered a hyperparameter selection problem, and often selected via intuition or extensive hyperparameter exploration (Chen et al., 2021). In this work, we view the choice of the timesteps of the inference schedule (which we just call an inference path) as an independent optimization problem, wherein we attempt to learn the best schedule. Our approach relies on the observation that we can solve this optimization problem with dynamic programming. Given a fixed budget of K refinement steps and a pre-trained DDPM, we find the set of timesteps that maximizes the corresponding evidence lower bound (ELBO). As an optimization objective, the ELBO has a key decomposability property: the total ELBO is the sum of individual KL terms, and for any two inference paths, if the timesteps (s, t) contiguously occur in both, they share a common KL term, therefore admitting memoization (see Section 4.1 for a precise definition).
Our main contributions are the following: • We introduce a method that finds the likelihood-optimal inference paths with a simple
dynamic programming algorithm for all possible computation budgets of K refinement steps. The algorithm searches over T > K timesteps, only requiring O(T ) neural network forward passes. It only needs to be applied once to a pre-trained DDPM, does not require training or retraining a DDPM, and is applicable to both time-discrete and time-continuous DDPMs.
• We experiment with DDPM models from prior work. On both Lsimple CIFAR10 and Lhybrid ImageNet 64x64, we discover schedules which require only 32 refinement steps, yet sacrifice only 0.1 bits per dimension compared to their original counterparts with 1,000 and 4,000 steps, respectively.
• We show that our method can be applied to any decomposable set of objectives. In particular, optimizing a reweighted ELBO can favourably bias our algorithm towards solutions with better FID scores, as we find that optimizing the exact variational lower bound may lead to worse FID scores, which is consistent with prior work on unconditional image generation.
2 BACKGROUND ON DENOISING DIFFUSION PROBABILISTIC MODELS
Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020; Sohl-Dickstein et al., 2015) are defined in terms of a forward Markovian diffusion process q and a learned reverse process pθ. The forward diffusion process gradually adds Gaussian noise to a data point x0 through T iterations,
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}) , \quad (1)
q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t \mid \sqrt{\alpha_t}\, x_{t-1}, (1-\alpha_t) I\big) , \quad (2)
where the scalar parameters α1:T determine the variance of the noise added at each diffusion step, subject to 0 < αt < 1. The learned reverse process aims to model q(x0) by inverting the forward process, gradually removing noise from signal starting from pure Gaussian noise xT ,
p(x_T) = \mathcal{N}(x_T \mid 0, I) , \quad (3)
p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t) , \quad (4)
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1} \mid \mu_\theta(x_t, t), \sigma_t^2 I\big) . \quad (5)
The parameters of the reverse process can be optimized by maximizing the following variational lower bound on the training set:
\mathbb{E}_q \log p(x_0) \;\ge\; \mathbb{E}_q \Big[ \log p_\theta(x_0 \mid x_1) - \sum_{t=2}^{T} D_{\mathrm{KL}}\big( q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t) \big) - L_T(x_0) \Big] , \quad (6)
where L_T(x_0) = D_{\mathrm{KL}}\big( q(x_T \mid x_0) \,\|\, p(x_T) \big). Nichol & Dhariwal (2021) have demonstrated that training DDPMs by maximizing the ELBO yields competitive log-likelihood scores on both CIFAR10 and ImageNet 64×64 achieving 2.94 and 3.53 bits per dimension respectively. Two notable properties of Gaussian diffusion process that help formulate DDPMs tractably and efficiently include:
q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid \sqrt{\gamma_t}\, x_0, (1-\gamma_t) I\big) , \quad \text{where } \gamma_t = \prod_{i=1}^{t} \alpha_i , \quad (7)
q(x_{t-1} \mid x_0, x_t) = \mathcal{N}\Big( x_{t-1} \,\Big|\, \frac{\sqrt{\gamma_{t-1}}\,(1-\alpha_t)\, x_0 + \sqrt{\alpha_t}\,(1-\gamma_{t-1})\, x_t}{1-\gamma_t}, \; \frac{(1-\gamma_{t-1})(1-\alpha_t)}{1-\gamma_t} I \Big) . \quad (8)
Given the marginal distribution of xt given x0 in (7), one can sample from the q(xt | x0) independently for different t and perform SGD on a randomly chosen KL term in (6). Furthermore, given that the posterior distribution of xt−1 given xt and x0 is Gaussian, one can compute each KL term in (6) between two Gaussians in closed form and avoid high variance Monte Carlo estimation.
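As a concrete illustration of these two properties, here is a minimal numpy sketch of sampling x_t directly from q(x_t | x_0) and of the closed-form KL divergence between two isotropic Gaussians; the alphas array and function names are illustrative, not taken from any released implementation.

    import numpy as np

    def sample_xt(x0, t, alphas, rng):
        # q(x_t | x_0) = N(sqrt(gamma_t) x_0, (1 - gamma_t) I) with gamma_t = prod_{i<=t} alpha_i
        gamma_t = np.prod(alphas[:t])
        return np.sqrt(gamma_t) * x0 + np.sqrt(1.0 - gamma_t) * rng.standard_normal(x0.shape)

    def isotropic_gaussian_kl(mu_q, var_q, mu_p, var_p):
        # KL( N(mu_q, var_q I) || N(mu_p, var_p I) ), summed over dimensions
        return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)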
3 LINKING DDPMS TO CONTINUOUS TIME AFFINE DIFFUSION PROCESSES
Before describing our approach to efficiently sampling from DDPMs, it is helpful to link DDPMs to continuous time affine diffusion processes, as it shows the compatibility of our approach to both time-discrete and time-continuous DDPMs (Song et al., 2021; Kingma et al., 2021). Let x0 ∼ q(x0) denote a data point drawn from the empirical distribution of interest and let q(xt|x0) denote a stochastic process for t ∈ [0, 1] defined through an affine diffusion process through the following stochastic differential equation (SDE):
dX_t = f_{\mathrm{sde}}(t)\, X_t\, dt + g_{\mathrm{sde}}(t)\, dB_t , \quad (9)
where fsde, gsde : [0, 1] → [0, 1] are integrable functions satisfying fsde(0) = 1 and gsde(0) = 0. Following Särkkä & Solin (2019) (section 6.1), we can compute the exact marginals q(xt|xs) for any 0 ≤ s < t ≤ 1. We get:
q(x_t \mid x_s) = \mathcal{N}\Big( x_t \,\Big|\, \psi(t,s)\, x_s, \; \Big( \int_s^t \psi(t,u)^2\, g_{\mathrm{sde}}(u)^2\, du \Big) I \Big) , \quad (10)
where \psi(t,s) = \exp \int_s^t f_{\mathrm{sde}}(u)\, du. Since these integrals are difficult to work with, we instead propose (in parallel to Kingma et al. (2021)) to define the marginals directly:
q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid f(t)\, x_0, \; g(t)^2 I\big) , \quad (11)
where f, g : [0, 1] → [0, 1] are differentiable, monotonic functions satisfying f(0) = 1, f(1) = 0, g(0) = 0, g(1) = 1. Then, by implicit differentiation it follows that the corresponding diffusion is
dX_t = \frac{f'(t)}{f(t)}\, X_t\, dt + \sqrt{ 2 g(t) \Big( g'(t) - \frac{f'(t)\, g(t)}{f(t)} \Big) }\, dB_t . \quad (12)
We provide a proof for Equation 12 in the appendix (A.1). To complete our formulation, let f_{ts} = \frac{f(t)}{f(s)} and g_{ts} = \sqrt{ g(t)^2 - f_{ts}^2\, g(s)^2 }. Then, it follows that for any 0 < s < t ≤ 1 we have that
q(x_t \mid x_s) = \mathcal{N}\big( x_t \mid f_{ts}\, x_s, \; g_{ts}^2 I \big) , \quad (13)
q(x_s \mid x_t, x_0) = \mathcal{N}\Big( x_s \,\Big|\, \frac{1}{g_{t0}^2}\big( f_{s0}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t \big), \; \frac{g_{s0}^2\, g_{ts}^2}{g_{t0}^2} I \Big) . \quad (14)
We include proofs for (13) and (14) in the appendix (A.2). These equations show that we can perform inference with any ancestral sampling path (i.e., the timesteps can attain continuous values) by formulating the reverse process in terms of the posterior distribution as
p_\theta(x_s \mid x_t) = q\Big( x_s \,\Big|\, x_t, \; \hat{x}_0 = \tfrac{1}{f_{t0}}\big( x_t - g_{t0}\, \epsilon_\theta(x_t, t) \big) \Big) , \quad (15)
justifying the compatibility of our main approach with time-continuous DDPMs. We note that this reverse process is also mathematically equivalent to a reverse process based on a time-discrete DDPM derived from a subsequence of the original timesteps as done by Song et al. (2020); Nichol & Dhariwal (2021). For the case of s = 0 in the reverse process, we follow the parametrization of Ho et al. (2020) to obtain discretized log likelihoods and compare our log likelihoods fairly with prior work.
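For concreteness, the following is a small sketch of Equations (13)–(15) given marginal coefficient functions f and g; the names are illustrative and eps_theta stands in for the trained noise-prediction network, so this should be read as an assumption-laden sketch rather than the released implementation.

    import numpy as np

    def posterior_params(f, g, t, s, x_t, x0_hat):
        # mean and variance of q(x_s | x_t, x_0 = x0_hat), Equation (14); uses f(0) = 1, g(0) = 0
        f_ts = f(t) / f(s)
        g_ts2 = g(t) ** 2 - f_ts ** 2 * g(s) ** 2
        f_s0, g_s02 = f(s), g(s) ** 2
        g_t02 = g(t) ** 2
        mean = (f_s0 * g_ts2 * x0_hat + f_ts * g_s02 * x_t) / g_t02
        var = g_s02 * g_ts2 / g_t02
        return mean, var

    def reverse_step(f, g, t, s, x_t, eps_theta, rng):
        # one ancestral sampling step from p_theta(x_s | x_t), Equation (15)
        x0_hat = (x_t - g(t) * eps_theta(x_t, t)) / f(t)   # f_t0 = f(t), g_t0 = g(t)
        mean, var = posterior_params(f, g, t, s, x_t, x0_hat)
        return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)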
Algorithm 1: Given a matrix L of shape (T+1) × (T+1) of precomputed L(·, ·) terms, find the likelihood-optimal schedules for all step budgets.

    import numpy as np

    def vectorized_dp_all_budgets(L):
        T = len(L) - 1
        D = np.full(L.shape, -1)       # D[k, t]: best predecessor of t on an optimal k-step path
        C = np.full(L.shape, np.inf)   # C[k, t]: cost of the optimal k-step path from t back to 0
        C[0, 0] = 0
        for k in range(1, T + 1):
            bpds = C[k - 1, None] + L  # bpds[t, s] = C[k-1, s] + L(t, s)
            C[k] = np.amin(bpds, axis=-1)
            D[k] = np.argmin(bpds, axis=-1)
        return D
Algorithm 2: Fetch the shortest path of K steps from the dynamic programming results implicitly returned by Algorithm 1.

    def fetch_shortest_path(D, K):
        optpath = []
        t = K
        for k in reversed(range(K)):
            optpath.append(t)
            t = D[k, t]
        return optpath
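A minimal usage sketch, assuming L is the (T+1) × (T+1) cost table described below with L[t, s] = ∞ for s ≥ t; the variable names are illustrative.

    D = vectorized_dp_all_budgets(L)   # backpointers for every step budget
    path = fetch_shortest_path(D, 32)  # grid indices of a 32-step schedule, listed from last to first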
4 LEARNING TO EFFICIENTLY SAMPLE FROM DDPMS
We now introduce our dynamic programming (DP) approach. In general, after training a DDPM, one can use a different inference path than the one used during training. Additionally, one can optimize a loss or reward function with respect to the timesteps themselves after the DDPM is trained. In this paper, we use the ELBO as our loss function, however we note that it is possible to directly optimize the timesteps with other objectives.
4.1 OPTIMIZING THE ELBO
In our work, we choose to optimize ELBO as our objective. We rely on one key property of ELBOs, their decomposability. Before defining decomposability, we formally define a K-step inference path as a finite, monotonically increasing sequence of timesteps 0 = t′0 < t ′ 1 < ... < t ′ K−1 < t ′ K = 1. Now, given a set S ⊆ [0, 1], we define a family of lower bounds L of an “ideal” objective Lideal to be S-decomposable if:
1. There is a bijection from L to the set of all inference paths t with all timesteps in S, and 2. There exists a function L : S × S → [0,∞) such that, for all inference paths t with all
timesteps in S, L_{\mathrm{ideal}} \ge \sum_{i=1}^{|t|-1} L(t_i, t_{i-1}) + C (C a constant).
We now show that DDPM ELBOs are decomposable. As shown by Song et al. (2020); Nichol & Dhariwal (2021) and the equations in Section 3, for any K and any K-step inference path t, there is a corresponding ELBO
-L_{\mathrm{ELBO}} = \mathbb{E}_q D_{\mathrm{KL}}\big( q(x_1 \mid x_0) \,\|\, p_\theta(x_1) \big) + \sum_{i=1}^{K} L(t'_i, t'_{i-1}) \quad (16)
where
L(t, s) = \begin{cases} -\mathbb{E}_q \log p_\theta(x_0 \mid x_t) & s = 0 \\ \mathbb{E}_q D_{\mathrm{KL}}\big( q(x_s \mid x_t, x_0) \,\|\, p_\theta(x_s \mid x_t) \big) & s > 0 \end{cases} \quad (17)
Since all of these are lower bounds of Eq log p(x0), we conclude the family of DDPM evidence lower bounds is decomposable. Specifically, a DDPM trained on a set of timesteps S admits S-decomposable ELBOs. For DDPMs trained with continuous timesteps, S = [0, 1]. For DDPMs trained on discrete timesteps, S is the set of those timesteps, as there is no guarantee that the behavior of the model won't be pathological when given timesteps it has never seen during training. Now the question remains: given a fixed budget of K steps, what is the optimal inference path?
First, we observe that any two paths that share a (t, s) transition will share a common L(t, s) term. We exploit this property in our dynamic programming algorithm. When given a grid S of plausible inference paths 0 = t0 < t1 < ... < tT−1 < tT = 1 with T ≥ K, it is possible to efficiently find the ELBO-optimal K-step inference path contained in S by memoizing all the individual L(t, s) ELBO terms for s, t ∈ {t0, ..., tT } with s < t. We can then solve the canonical least-cost-path problem on a directed graph where s→ t are nodes and the edge connecting them has cost L(t, s).
4.2 DYNAMIC PROGRAMMING ALGORITHM
We now outline our methodology to solve the least-cost-path problem. Our solution is similar to Dijkstra's algorithm, but our problem differs from the classical least-cost-path problem for which the latter is typically used, as it has additional constraints: we restrict our search to paths of exactly K + 1 nodes, and the start and end nodes are fixed.
Let C and D be (K + 1)× (T + 1) matrices. C[k, t] will be the total cost of the least-cost-path of length k from t to 0. D will be filled with the timesteps corresponding to such paths; i.e., D[k, t] will be the timestep s immediately previous to t for the optimal k-step path (assuming t is also part of such path).
We initialize C[0, 0] = 0 and all the other C[0, ·] to∞ (the D[0, ·] are irrelevant, but for ease of index notation we keep them in this section). Then, for each k from 1 to K, we iteratively set, for each t,
C[k, t] = \min_s \big( C[k-1, s] + L(t, s) \big) , \qquad D[k, t] = \operatorname{argmin}_s \big( C[k-1, s] + L(t, s) \big) ,
where L(t, s) is the cost to transition from t to s (see Equation 17). For all s ≥ t, we set L(t, s) =∞ (e.g., we only move backwards in the diffusion process). This procedure captures the shortest path cost in C and the shortest path itself in D. We further observe that running the DP algorithm for each k from 1 to T (instead of K), we can extract the optimal paths for all possible budgets K. Algorithm 1 illustrates a vectorized version of the procedure we have outlined in this section, while Algorithm 2 shows how to explicitly extract the optimal paths from D.
4.3 EFFICIENT MEMOIZATION
A priori, our dynamic programming approach appears to be inefficient because it requires computing O(T^2) terms (recall, as we rely on all the L(t, s) terms which depend on a neural network forward pass). We however observe that a single forward pass of the DDPM can be used to compute all the L(t, ·) terms. This holds true even in the case where the pre-trained DDPM learns the variances. For example, in Nichol & Dhariwal (2021), instead of fixing them to \tilde{g}_{ts} = \frac{g_{ts}\, g_{s0}}{g_{t0}} as we outlined in the previous section, the forward pass itself still only depends on t and not s, and the variance of p_\theta(x_s \mid x_t) is obtained by interpolating the forward pass's output logits v with \exp\big(v \log g_{ts}^2 + (1-v) \log \tilde{g}_{ts}^2\big). Thus, computing the table of all the L(t, s) ELBO terms only requires O(T) forward passes.
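A minimal sketch of this memoization is given below, assuming a noise-prediction model; sample_xt_batch and elbo_term are illustrative placeholder helpers, not part of any released code, and the point of the sketch is only that one forward pass at timestep t fills the entire row of the table.

    import numpy as np

    def fill_cost_table(model, x0_batch, grid):
        # grid: increasing timesteps t_0 < ... < t_T; L[t, s] will hold the L(t, s) term.
        T = len(grid) - 1
        L = np.full((T + 1, T + 1), np.inf)
        for ti in range(1, T + 1):
            x_t = sample_xt_batch(x0_batch, grid[ti])   # x_t ~ q(x_t | x_0), Monte Carlo batch
            eps = model(x_t, grid[ti])                  # one forward pass, reused for every s
            for si in range(ti):
                L[ti, si] = elbo_term(x0_batch, x_t, eps, grid[ti], grid[si])
        return L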
5 EXPERIMENTS
We apply our method on a wide variety of pre-trained DDPMs from prior work. This emphasizes the fact that our method is applicable to any pre-trained DDPM model. In particular, we rely on the CIFAR10 model checkpoints released by Nichol & Dhariwal (2021) on both their Lhybrid and Lvlb objectives. We also showcase results on CIFAR10 (Krizhevsky et al., 2009) with the exact configuration used by Ho et al. (2020), which we denote as Lsimple, as well as Lhybrid on ImageNet 64x64 (Deng et al.,
2009) following Nichol & Dhariwal (2021), training these last two models from scratch for 800K and 3M steps, respectively, but otherwise using the exact same configurations as the authors.
In our experiments, we always search over a grid that includes all the timesteps used to train the model, i.e., {t/T : t ∈ {1, ..., T − 1}}. For our CIFAR10 results, we computed the memoization tables with Monte Carlo estimates over the full training dataset, while on ImageNet 64x64 we limited the number of datapoints in the Monte Carlo estimates to 16,384 images on the training dataset.
For each pre-trained model, we compare the negative log likelihoods (estimated using the full heldout dataset) of the strides discovered by our dynamic programming algorithm against even and quadratic strides, following Song et al. (2020). We find that our dynamic programming algorithm discovers strides resulting in much better log likelihoods than the hand-crafted strides used in prior work, particularly in the few-step regime. We provide a visualization of the log likelihood curves as a function of computation budget in Figure 1 for Lsimple CIFAR10 and Lhybrid ImageNet 64x64 (Deng et al., 2009), a full list of the scores in the few-step regime in Table 1, and a visualization of the discovered steps themselves in Figure 2.
5.1 COMPARISON WITH FID
We further evaluate our discovered strides by reporting FID scores (Heusel et al., 2017) on 50,000 model samples against the same number of samples from the training dataset, as is standard in the literature. We find that, although our strides yield much better log likelihoods, such optimization does not necessarily translate to also improving the FID scores. Results are included in Figure 3. This weakened correlation between log-likelihoods and FID is consistent with observations in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021).
To remedy this issue, we show that despite our focus in this work being likelihood, we can significantly improve the FID scores discovered by our method simply by optimizing a reweighted ELBO. Recall that, as discussed in Section 4, our proposed method is compatible with any decomposable objective. Moreover, prior work has shown that the choice of ELBO weights has a significant effect on sample quality (Ho et al., 2020; Durkan & Song, 2021), and that choosing weights corresponds to choosing an equally valid variational lower bound of the data for a DDIM (Song et al., 2020). Similarly to prior work in the VAE literature, we thus stumble upon an open problem where different variational lower bounds compatible with the model (even with the same bits/dim) can lead to samples with different qualitative charachteristics (Alemi et al., 2018). As our focus is likelihood, we leave this research question of finding the weights / ELBO that lead to most agreement with FID for future work, but nevertheless identify one such choice that favourably biases our algorithm toward this front. Details about our construction of weights and results are included in the appendix (A.3).
5.2 MONTE CARLO ABLATION
To investigate the feasibility of our approach using minimal computation, we experimented with setting the number of Monte Carlo datapoints used to compute the dynamic programming table of negative log likelihood terms to 128 samples (i.e., easily fit into a single batch of GPU memory). We find that, for CIFAR10, the difference in log likelihoods is negligible, while on ImageNet 64x64 there is a visible yet slight improvement in negative log likelihood when filling the table with more samples. We hypothesize that this is due to the higher diversity of ImageNet. Nevertheless, we highlight that our procedure can be applied very quickly (i.e., with just T forward passes of a neural network when using a single batch, as opposed to a running average over batches), even for large models, to significantly improve their log likelihoods in the few-step regime.
6 RELATED WORK
DDPMs (Ho et al., 2020) have recently shown results that are competitive with GANs (Goodfellow et al., 2014), and they can be traced back to the work of Sohl-Dickstein et al. (2015) as a restricted family of deep latent variable models. Dhariwal & Nichol (2021) have more recently shown that DDPMs can outperform GANs in FID scores (Heusel et al., 2017). Song & Ermon (2019) have also linked DDPMs to denoising score matching (Vincent et al., 2008; 2010), which is crucial to the continuous-time formulation (Song et al., 2021; Kingma et al., 2021). More recent work on the few-step regime of DDPMs (Song et al., 2020; Chen et al., 2021; Nichol & Dhariwal, 2021; San-Roman et al., 2021; Kong & Ping, 2021; Jolicoeur-Martineau et al., 2021) has also guided our research efforts. DDPMs are also very closely related to variational autoencoders (Kingma & Welling,
2013), where more recent work has shown that, with many stochastic layers, they can also attain competitive negative log likelihoods in unconditional image generation (Child, 2020). Also very closely related to DDPMs, there has also been work on non-autoregressive modeling of text sequences that can be regarded as discrete-space DDPMs with a forward process that masks or remove tokens (Lee et al., 2018; Gu et al., 2019; Stern et al., 2019; Chan et al., 2020; Saharia et al., 2020). The UNet architecture (Ronneberger et al., 2015) has been key to the recent success of DDPMs, and as shown by Ho et al. (2020); Nichol & Dhariwal (2021), augmenting UNet with self-attention (Shaw et al., 2018) in scales where attention is computationally feasible has helped bring DDPMs closer to the current state-of-the-art autoregressive generative models (Child et al., 2019; Jun et al., 2020; Roy et al., 2021).
7 CONCLUSION AND DISCUSSION
By regarding the selection of the inference schedule as an optimization problem, we present a novel and efficient approach to discover a likelihood-optimal inference schedule for a pre-trained DDPM with a simple dynamic programming algorithm. Our method need only be applied once to discover the schedule, and does not require re-training the DDPM. In the few-step regime, we discover schedules on Lsimple CIFAR10 and Lhybrid ImageNet 64x64 that require only 32 steps, yet sacrifice ≤ 0.1 bits per dimension compared to state-of-the-art DDPMs using hundreds-to-thousands of refinement steps. Our approach only needs forward passes of the DDPM neural network to fill the dynamic programming table of L(t, s) terms, and we show that we can fill the dynamic programming table with just O(T) forward passes. Moreover, we can estimate the table using only 128 Monte Carlo samples, finding this to be sufficient even for datasets such as ImageNet with high diversity. Our method achieves strong likelihoods with very few refinement steps, outperforming prior work utilizing hand-crafted strides (Ho et al., 2020; Nichol & Dhariwal, 2021).
Despite very strong log-likelihood results, we observe that maximizing the unweighted ELBO can actually lead to higher (worse) FID scores, and on ImageNet 64x64, a decrease in sample quality for the smallest budgets K ∈ {32, 64}; this is consistent with findings in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021). Nevertheless, our method is compatible with any decomposable objective such as reweighted variational lower bounds, and we show that a simple choice of reweighted ELBO (or equivalently a choice of DDIM) can remedy this issue. Developing principled methods to choose variational lower bounds or other decomposable metrics that correlate best with image quality is thus an important direction for future research. Finally, we remark that likelihood optimization itself is useful for specific applications: as well as better compression, in domains such as non-autoregressive text generation where likelihood correlates much better with sample quality and where diffusion models are starting to make progress (Austin et al., 2021), our method has the potential to improve sampling speed with far less cost in generation fidelity and without such adaptations.
REPRODUCIBILITY STATEMENT
We will fully open source our work and provide code pointers in the paper in the camera-ready version. Nevertheless, we provide pseudocode with a complete implementation of our proposed algorithm to maximize ease of reproducibility while we work on open-sourcing our work (see Algorithms 1 and 2). Since we experiment with open-sourced datasets and pre-trained models that already have publicly available checkpoints, our work is fully reproducible. We additionally emphasize that our method has no hyperparameters of its own.
ETHICS STATEMENT
Innovations in generative models have the potential to enable harmful and unethical applications. In applications where no harm is intended, bias and other failure modes of generative models and datasets used to train them can also lead to issues with fairness, discrimination, and other forms of harm. While our work is focused on making diffusion models more efficient, we believe its public release will not cause any form of immediate harm, as much more efficient generative models for images like GANs can still achieve high sample quality at a fraction of the speed of diffusion models.
A APPENDIX
A.1 PROOF FOR EQUATION 12
From Equation 10, we get by implicit differentiation that
f(t) = \psi(t, 0) = \exp\Big( \int_0^t f_{\mathrm{sde}}(u)\, du \Big)
\;\Rightarrow\; f'(t) = \exp\Big( \int_0^t f_{\mathrm{sde}}(u)\, du \Big) \frac{d}{dt} \int_0^t f_{\mathrm{sde}}(u)\, du = f(t)\, f_{\mathrm{sde}}(t)
\;\Rightarrow\; f_{\mathrm{sde}}(t) = \frac{f'(t)}{f(t)} .
Similarly as above, and also using the fact that \psi(t, s) = \frac{\psi(t, 0)}{\psi(s, 0)},
g(t)^2 = \int_0^t \psi(t, u)^2\, g_{\mathrm{sde}}(u)^2\, du = \int_0^t \frac{f(t)^2}{f(u)^2}\, g_{\mathrm{sde}}(u)^2\, du = f(t)^2 \int_0^t \frac{g_{\mathrm{sde}}(u)^2}{f(u)^2}\, du
\;\Rightarrow\; 2 g(t) g'(t) = 2 f(t) f'(t) \frac{g(t)^2}{f(t)^2} + f(t)^2 \frac{d}{dt} \int_0^t \frac{g_{\mathrm{sde}}(u)^2}{f(u)^2}\, du = 2 f_{\mathrm{sde}}(t)\, g(t)^2 + g_{\mathrm{sde}}(t)^2
\;\Rightarrow\; g_{\mathrm{sde}}(t) = \sqrt{ 2 \big( g(t) g'(t) - f_{\mathrm{sde}}(t)\, g(t)^2 \big) } .
A.2 PROOF FOR EQUATIONS 13 AND 14
From Equation 10 and \psi(t, s) = \frac{\psi(t, 0)}{\psi(s, 0)} it is immediate that f_{ts}\, x_s is the mean of q(x_t \mid x_s). To show that g_{ts}^2 is the variance of q(x_t \mid x_s), Equation 10 implies that
\mathrm{Var}[x_t \mid x_s] = \int_s^t \psi(t, u)^2\, g_{\mathrm{sde}}(u)^2\, du
= \int_0^t \psi(t, u)^2\, g_{\mathrm{sde}}(u)^2\, du - \int_0^s \psi(t, u)^2\, g_{\mathrm{sde}}(u)^2\, du
= g(t)^2 - \psi(t, 0)^2 \int_0^s \frac{\psi(s, u)^2}{\psi(s, u)^2\, \psi(u, 0)^2}\, g_{\mathrm{sde}}(u)^2\, du
= g(t)^2 - \psi(t, 0)^2 \int_0^s \frac{\psi(s, u)^2}{\psi(s, 0)^2}\, g_{\mathrm{sde}}(u)^2\, du = g(t)^2 - \psi(t, s)^2\, g(s)^2
= g(t)^2 - f_{ts}^2\, g(s)^2 .
The mean of q(x_s \mid x_t, x_0) is given by the Gaussian conjugate prior formula (where all the distributions are conditioned on x_0). Let \mu = f_{ts}\, x_s, so we have a prior over \mu given by
x_s \mid x_0 \sim \mathcal{N}(f_{s0}\, x_0, g_{s0}^2 I_d) \;\Rightarrow\; \mu \mid x_0 \sim \mathcal{N}(f_{s0} f_{ts}\, x_0, f_{ts}^2 g_{s0}^2 I_d) \sim \mathcal{N}(f_{t0}\, x_0, f_{ts}^2 g_{s0}^2 I_d) ,
and a likelihood with mean \mu
x_t \mid x_s, x_0 \sim x_t \mid x_s \sim \mathcal{N}(f_{ts}\, x_s, g_{ts}^2 I_d) \;\Rightarrow\; x_t \mid \mu, x_0 \sim x_t \mid \mu \sim \mathcal{N}(\mu, g_{ts}^2 I_d) .
Then it follows by the formula that \mu \mid x_t, x_0 has variance
\mathrm{Var}[\mu \mid x_t, x_0] = \Big( \frac{1}{f_{ts}^2 g_{s0}^2} + \frac{1}{g_{ts}^2} \Big)^{-1} = \Big( \frac{g_{ts}^2 + f_{ts}^2 g_{s0}^2}{f_{ts}^2 g_{s0}^2 g_{ts}^2} \Big)^{-1} = \frac{f_{ts}^2 g_{s0}^2 g_{ts}^2}{g_{ts}^2 + f_{ts}^2 g_{s0}^2}
\;\Rightarrow\; \mathrm{Var}[x_s \mid x_t, x_0] = \frac{1}{f_{ts}^2}\, \mathrm{Var}[\mu \mid x_t, x_0] = \frac{g_{s0}^2 g_{ts}^2}{g_{ts}^2 + f_{ts}^2 g_{s0}^2} = \frac{g_{s0}^2 g_{ts}^2}{g_{t0}^2} = \tilde{g}_{ts}^2
and mean
\mathbb{E}[\mu \mid x_t, x_0] = \Big( \frac{1}{f_{ts}^2 g_{s0}^2} + \frac{1}{g_{ts}^2} \Big)^{-1} \Big( \frac{f_{t0}\, x_0}{f_{ts}^2 g_{s0}^2} + \frac{x_t}{g_{ts}^2} \Big) = \frac{f_{t0}\, g_{ts}^2\, x_0 + f_{ts}^2 g_{s0}^2\, x_t}{g_{ts}^2 + f_{ts}^2 g_{s0}^2} = \frac{f_{t0}\, g_{ts}^2\, x_0 + f_{ts}^2 g_{s0}^2\, x_t}{g_{t0}^2}
\;\Rightarrow\; \mathbb{E}[x_s \mid x_t, x_0] = \frac{1}{f_{ts}}\, \mathbb{E}[\mu \mid x_t, x_0] = \frac{\frac{f_{t0}}{f_{ts}}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t}{g_{t0}^2} = \frac{f_{s0}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t}{g_{t0}^2} = \tilde{f}_{ts}(x_t, x_0) .
A.3 REWEIGHTED ELBO RESULTS
While the focus of our work is likelihood, we report FID scores for the sake of completeness, as well as to show the adaptability of our method via reweighting to focus on sample quality, as mentioned in Section 5.1.
To choose a reweighting scheme for each L(t, s) term that takes into account both t and s, Kingma et al. (2021) show that choosing discretized terms
L_w(t, s) = \Big[ -\int_s^t w(u)\, \mathrm{SNR}'(u)\, du \Big] \, \| x_0 - \hat{x}_0(x_t, t) \|^2 \quad (18)
ensures that at the limit of infinite timesteps the continuous ELBO becomes -\frac{1}{2}\, \mathbb{E}_q \int_0^1 w(t)\, \mathrm{SNR}'(t)\, \| x_0 - \hat{x}_0(x_t, t) \|^2\, dt (where \mathrm{SNR}(t) = \frac{f_{t0}^2}{g_{t0}^2}, and a constant w(t) = 1 yields the unweighted ELBO).
While choosing w(t) = -\frac{\mathrm{SNR}(t)}{\mathrm{SNR}'(t)} (which is Lsimple at the limit) did not work, we find that choosing w(t) = -\frac{1}{\mathrm{SNR}'(t)} (which leads to a continuous objective of unweighted mean square errors and L_w(t, s) = (t - s)\, \| x_0 - \hat{x}_0(x_t, t) \|^2) allows our algorithm to outperform DDPM FID scores and achieve similar scores to DDIM (Song et al., 2020). We call this “MSE reweighting” and include FID scores in Table 5, comparing to DDPM but also DDIM(η = 0) which does not admit likelihood computation but has been shown to be one of the strongest FID baselines in the few-step regime. The FID scores were estimated with 50,000 model and training data samples, as is standard in the literature.
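As a concrete illustration, under this choice the reweighted cost terms fed to the dynamic program reduce to a time-scaled reconstruction error; a minimal sketch follows, with the function name being illustrative rather than part of the released code.

    import numpy as np

    def mse_reweighted_term(x0, x0_hat, t, s):
        # L_w(t, s) = (t - s) * ||x0 - x0_hat||^2 under the w(t) = -1 / SNR'(t) reweighting
        return (t - s) * np.sum((x0 - x0_hat) ** 2)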
A.4 NOTE ON SAMPLING STRATEGIES FOR FAIR COMPARISON
Finally, we discuss an alternative approach to that used in the figures of the paper for producing more qualitatively comparable samples. Given a fixed budget K, Figures 4 and 5 are produced to generate “comparable” samples across different strides by fixing all the standard Gaussian vectors in the sampling chain. However, another approach that allows to compare samples across different budgets is to fix a single Brownian motion trajectory on all T steps, and using discretizations (based on any stride) of this single random trajectory to generate the samples. We empirically find, however, that the former approach tends to produce much more similar images (see Figures 7 and 8 below). We suspect that the use of different strides (and hence different random directions from the fixed random trajectory) along with the very chaotic, non-linear behavior of DDPMs is the cause of this behavior. | 1. What is the focus of the paper regarding DDPM sampling?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have concerns about the relevance of a particular section in the paper?
4. Are there any questions or confusion regarding the method's application to continuous time steps?
5. Why did the authors choose to retrain some models instead of only testing with fixed pretrained models? | Summary Of The Paper
Review | Summary Of The Paper
This work presents a method to efficiently sample from a pre-trained DDPM by solving a dynamic programming problem that can maximize the log likelihood of the data samples given a fixed computational budget. This is done by defining a least-cost path problem to select a reduced set of time steps among a full grid of potential time steps across different possible step budget sizes, where the ELBO is used as the cost function. The authors show that their method can identify DDPM schedules that can achieve significantly higher log likelihood (i.e. lower bits/dim) than prior DDPM schedules in the regime where about a hundred steps or fewer are used.
Review
Strengths:
The dynamic programming problem identified by the authors is an elegant and efficient approach to address the sampling limitations of DDPMs. It is natural to frame the search for an optimal schedule as a Dynamic Programming problem, and the authors show this problem can be efficiently solved in linear rather than quadratic time.
The proposed method shows a significant improvement in model performance as measured by log likelihood compared to prior methods when applying a pre-trained DDPM over a greatly reduced set of time steps.
Weaknesses:
The main weakness of this work is that the method appears to overfit the ELBO objective without improving (and potentially reducing) the visual quality of generated samples. In particular, the proposed method can significantly improve the log likelihood over few-step diffusion paths compared to prior techniques. However, the Dynamic Programming step schedules can actually decrease the quality of visual appearance, as measured by FID, compared to previous methods. Personally, I consider FID to be a much more reliable indicator of model quality than the log likelihood, due to its sensitivity to small changes, ability to detect mode coverage, and the fact that FID is model-agnostic, while log likelihood can only be applied to models with a tractable density or ELBO. The authors acknowledge this limitation and explore efficient schedules for maintaining low FID/high visual quality, but these results do not improve upon prior methods. Thus, while the authors achieve their intended goal of efficient and high log likelihoods via their new method, the outcome might not be particularly meaningful since it doesn't really improve model/sample quality.
I am unsure of the relevance of Section 3. How does this fit into the presentation in Section 4? See "Other Comments" below.
Other Comments:
In Section 3, there is a claim that "These equations show that we can perform inference with any ancestral sampling path (i.e., the timesteps can attain continuous values)" but in Section 4, there is a claim that "For time-continuous DDPMs, the choice of grid (i.e., the
t
1
,
…
,
t
T
−
1
)
can be arbitrary. For models trained with discrete timesteps, the grid must be a subset of (or the full) original steps used during training." Why does the method not work for arbitrary continuous time steps if the model is trained with discrete time steps? The first claim makes it seems like that would be possible.
Why were some of the models used retrained, instead doing testing using only fixed pretrained models? |
ICLR | Title
Learning to Efficiently Sample from Diffusion Probabilistic Models
Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful family of generative models, yielding high-fidelity samples and competitive log-likelihoods across a range of domains, including image and speech synthesis. Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models. However, DDPMs typically require hundreds-to-thousands of steps to generate a high fidelity sample, making them prohibitively expensive for high dimensional problems. Fortunately, DDPMs allow trading generation speed for sample quality through adjusting the number of refinement steps during inference. Prior work has been successful in improving generation speed through handcrafting the time schedule through trial and error. We instead view the selection of the inference time schedules as an optimization problem, and show that, with a simple dynamic programming algorithm, one can find the log-likelihood-optimal discrete time schedules for any pre-trained DDPM. Our method exploits the fact that the evidence lower bound (ELBO) can be decomposed into separate KL divergence terms, and given any computation budget, we discover the time schedule that maximizes the training ELBO exactly. Our method is efficient, has no hyper-parameters of its own, and can be applied to any pre-trained DDPM with no retraining. We discover inference time schedules requiring as few as 32 refinement steps, while sacrificing less than 0.1 bits per dimension compared to the default 4,000 steps used on an ImageNet 64x64 model.
1 INTRODUCTION
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful class of generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). DDPMs model the data distribution through an iterative denoising process, and have been applied successfully to a variety of applications, including unconditional image generation (Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021; Nichol & Dhariwal, 2021), shape generation (Cai et al., 2020), text-to-speech (Chen et al., 2021; Kong et al., 2020) and single image super-resolution (Saharia et al., 2021; Li et al., 2021).
DDPMs are easy to train, featuring a simple denoising objective (Ho et al., 2020) with noise schedules that successfully transfer across different models and datasets. This contrasts to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require an inner-outer loop optimization procedure that often entails instability and requires careful hyperparameter tuning. DDPMs also admit a simple non-autoregressive inference process; this contrasts to autoregressive models with often prohibitive computational costs on high dimensional data. The DDPM inference process starts with samples from the corresponding prior noise distribution (e.g., standard Gaussian), and iteratively denoises the samples under the fixed noise schedule. However, DDPMs often need hundreds-tothousands of denoising steps (each involving a feedforward pass of a large neural network) to achieve strong results. While this process is still much faster than autoregressive models, this is still often computationally prohibitive, especially when modeling high dimensional data.
There has been much recent work focused on improving the sampling speed of DDPMs. WaveGrad (Chen et al., 2021) introduced a manually crafted schedule requiring only 6 refinement steps; however, this schedule seems to be only applicable to the vocoding task where there is a very strong conditioning signal. Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020) accelerate sampling from
pre-trained DDPMs by relying on a family of non-Markovian processes. They accelerate the generative process through taking multiple steps in the diffusion process. However, DDIMs sacrifice the ability to compute log-likelihoods. Nichol & Dhariwal (2021) also explored the use of ancestral sampling with a subsequence of the original denoising steps, trying both a uniform stride and other hand-crafted strides. San-Roman et al. (2021) improve few-step sampling further by training a separate model after training a DDPM to estimate the level of noise, and modifying inference to dynamically adjust the noise schedule at every step to match the predicted noise level.
All these fast-sampling techniques rely on a key property of DDPMs – there is a decoupling between the training and inference schedule. The training schedule need not be the same as the inference schedule, e.g., a diffusion model trained to use 1000 steps may actually use only 10 steps during inference. This decoupling characteristic is typically not found in other generative models. In past work, the choice of inference schedule was often considered a hyperparameter selection problem, and often selected via intuition or extensive hyperparameter exploration (Chen et al., 2021). In this work, we view the choice of the timesteps of the inference schedule (which we just call an inference path) as an independent optimization problem, wherein we attempt to learn the best schedule. Our approach relies on the observation that we can solve this optimization problem with dynamic programming. Given a fixed budget of K refinement steps and a pre-trained DDPM, we find the set of timesteps that maximizes the corresponding evidence lower bound (ELBO). As an optimization objective, the ELBO has a key decomposability property: the total ELBO is the sum of individual KL terms, and for any two inference paths, if the timesteps (s, t) contiguously occur in both, they share a common KL term, therefore admitting memoization (see Section 4.1 for a precise definition).
Our main contributions are the following: • We introduce a method that finds the likelihood-optimal inference paths with a simple
dynamic programming algorithm for all possible computation budgets of K refinement steps. The algorithm searches over T > K timesteps, only requiring O(T ) neural network forward passes. It only needs to be applied once to a pre-trained DDPM, does not require training or retraining a DDPM, and is applicable to both time-discrete and time-continuous DDPMs.
• We experiment with DDPM models from prior work. On both Lsimple CIFAR10 and Lhybrid ImageNet 64x64, we discover schedules which require only 32 refinement steps, yet sacrifice only 0.1 bits per dimension compared to their original counterparts with 1,000 and 4,000 steps, respectively.
• We show that our method can be applied to any decomposable set of objectives. In particular, optimizing a reweighted ELBO can favourably bias our algorithm towards solutions with better FID scores, as we find that optimizing the exact variational lower bound may lead to worse FID scores, which is consistent with prior work on unconditional image generation.
2 BACKGROUND ON DENOISING DIFFUSION PROBABILISTIC MODELS
Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020; Sohl-Dickstein et al., 2015) are defined in terms of a forward Markovian diffusion process q and a learned reverse process pθ. The forward diffusion process gradually adds Gaussian noise to a data point x0 through T iterations,
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}) , \quad (1)
q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t \mid \sqrt{\alpha_t}\, x_{t-1}, (1-\alpha_t) I\big) , \quad (2)
where the scalar parameters α1:T determine the variance of the noise added at each diffusion step, subject to 0 < αt < 1. The learned reverse process aims to model q(x0) by inverting the forward process, gradually removing noise from signal starting from pure Gaussian noise xT ,
p(x_T) = \mathcal{N}(x_T \mid 0, I) , \quad (3)
p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t) , \quad (4)
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1} \mid \mu_\theta(x_t, t), \sigma_t^2 I\big) . \quad (5)
The parameters of the reverse process can be optimized by maximizing the following variational lower bound on the training set:
\mathbb{E}_q \log p(x_0) \;\ge\; \mathbb{E}_q \Big[ \log p_\theta(x_0 \mid x_1) - \sum_{t=2}^{T} D_{\mathrm{KL}}\big( q(x_{t-1} \mid x_t, x_0) \,\|\, p_\theta(x_{t-1} \mid x_t) \big) - L_T(x_0) \Big] , \quad (6)
where L_T(x_0) = D_{\mathrm{KL}}\big( q(x_T \mid x_0) \,\|\, p(x_T) \big). Nichol & Dhariwal (2021) have demonstrated that training DDPMs by maximizing the ELBO yields competitive log-likelihood scores on both CIFAR10 and ImageNet 64×64 achieving 2.94 and 3.53 bits per dimension respectively. Two notable properties of Gaussian diffusion process that help formulate DDPMs tractably and efficiently include:
q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid \sqrt{\gamma_t}\, x_0, (1-\gamma_t) I\big) , \quad \text{where } \gamma_t = \prod_{i=1}^{t} \alpha_i , \quad (7)
q(x_{t-1} \mid x_0, x_t) = \mathcal{N}\Big( x_{t-1} \,\Big|\, \frac{\sqrt{\gamma_{t-1}}\,(1-\alpha_t)\, x_0 + \sqrt{\alpha_t}\,(1-\gamma_{t-1})\, x_t}{1-\gamma_t}, \; \frac{(1-\gamma_{t-1})(1-\alpha_t)}{1-\gamma_t} I \Big) . \quad (8)
Given the marginal distribution of xt given x0 in (7), one can sample from the q(xt | x0) independently for different t and perform SGD on a randomly chosen KL term in (6). Furthermore, given that the posterior distribution of xt−1 given xt and x0 is Gaussian, one can compute each KL term in (6) between two Gaussians in closed form and avoid high variance Monte Carlo estimation.
3 LINKING DDPMS TO CONTINUOUS TIME AFFINE DIFFUSION PROCESSES
Before describing our approach to efficiently sampling from DDPMs, it is helpful to link DDPMs to continuous time affine diffusion processes, as it shows the compatibility of our approach to both time-discrete and time-continuous DDPMs (Song et al., 2021; Kingma et al., 2021). Let x0 ∼ q(x0) denote a data point drawn from the empirical distribution of interest and let q(xt|x0) denote a stochastic process for t ∈ [0, 1] defined through an affine diffusion process through the following stochastic differential equation (SDE):
dX_t = f_{\mathrm{sde}}(t)\, X_t\, dt + g_{\mathrm{sde}}(t)\, dB_t , \quad (9)
where fsde, gsde : [0, 1] → [0, 1] are integrable functions satisfying fsde(0) = 1 and gsde(0) = 0. Following Särkkä & Solin (2019) (section 6.1), we can compute the exact marginals q(xt|xs) for any 0 ≤ s < t ≤ 1. We get:
q(x_t \mid x_s) = \mathcal{N}\Big( x_t \,\Big|\, \psi(t,s)\, x_s, \; \Big( \int_s^t \psi(t,u)^2\, g_{\mathrm{sde}}(u)^2\, du \Big) I \Big) , \quad (10)
where \psi(t,s) = \exp \int_s^t f_{\mathrm{sde}}(u)\, du. Since these integrals are difficult to work with, we instead propose (in parallel to Kingma et al. (2021)) to define the marginals directly:
q(x_t \mid x_0) = \mathcal{N}\big(x_t \mid f(t)\, x_0, \; g(t)^2 I\big) , \quad (11)
where f, g : [0, 1] → [0, 1] are differentiable, monotonic functions satisfying f(0) = 1, f(1) = 0, g(0) = 0, g(1) = 1. Then, by implicit differentiation it follows that the corresponding diffusion is
dX_t = \frac{f'(t)}{f(t)}\, X_t\, dt + \sqrt{ 2 g(t) \Big( g'(t) - \frac{f'(t)\, g(t)}{f(t)} \Big) }\, dB_t . \quad (12)
We provide a proof for Equation 12 in the appendix (A.1). To complete our formulation, let f_{ts} = \frac{f(t)}{f(s)} and g_{ts} = \sqrt{ g(t)^2 - f_{ts}^2\, g(s)^2 }. Then, it follows that for any 0 < s < t ≤ 1 we have that
q(x_t \mid x_s) = \mathcal{N}\big( x_t \mid f_{ts}\, x_s, \; g_{ts}^2 I \big) , \quad (13)
q(x_s \mid x_t, x_0) = \mathcal{N}\Big( x_s \,\Big|\, \frac{1}{g_{t0}^2}\big( f_{s0}\, g_{ts}^2\, x_0 + f_{ts}\, g_{s0}^2\, x_t \big), \; \frac{g_{s0}^2\, g_{ts}^2}{g_{t0}^2} I \Big) . \quad (14)
We include proofs for (13) and (14) in the appendix (A.2). These equations show that we can perform inference with any ancestral sampling path (i.e., the timesteps can attain continuous values) by formulating the reverse process in terms of the posterior distribution as
p_\theta(x_s \mid x_t) = q\Big( x_s \,\Big|\, x_t, \; \hat{x}_0 = \tfrac{1}{f_{t0}}\big( x_t - g_{t0}\, \epsilon_\theta(x_t, t) \big) \Big) , \quad (15)
justifying the compatibility of our main approach with time-continuous DDPMs. We note that this reverse process is also mathematically equivalent to a reverse process based on a time-discrete DDPM derived from a subsequence of the original timesteps as done by Song et al. (2020); Nichol & Dhariwal (2021). For the case of s = 0 in the reverse process, we follow the parametrization of Ho et al. (2020) to obtain discretized log likelihoods and compare our log likelihoods fairly with prior work.
Algorithm 1: Given a matrix L of shape (T+1) × (T+1) of precomputed L(·, ·) terms, find the likelihood-optimal schedules for all step budgets.

    import numpy as np

    def vectorized_dp_all_budgets(L):
        T = len(L) - 1
        D = np.full(L.shape, -1)       # D[k, t]: best predecessor of t on an optimal k-step path
        C = np.full(L.shape, np.inf)   # C[k, t]: cost of the optimal k-step path from t back to 0
        C[0, 0] = 0
        for k in range(1, T + 1):
            bpds = C[k - 1, None] + L  # bpds[t, s] = C[k-1, s] + L(t, s)
            C[k] = np.amin(bpds, axis=-1)
            D[k] = np.argmin(bpds, axis=-1)
        return D
Algorithm 2: Fetch the shortest path of K steps from the dynamic programming results implicitly returned by Algorithm 1.

    def fetch_shortest_path(D, K):
        optpath = []
        t = K
        for k in reversed(range(K)):
            optpath.append(t)
            t = D[k, t]
        return optpath
4 LEARNING TO EFFICIENTLY SAMPLE FROM DDPMS
We now introduce our dynamic programming (DP) approach. In general, after training a DDPM, one can use a different inference path than the one used during training. Additionally, one can optimize a loss or reward function with respect to the timesteps themselves after the DDPM is trained. In this paper, we use the ELBO as our loss function, however we note that it is possible to directly optimize the timesteps with other objectives.
4.1 OPTIMIZING THE ELBO
In our work, we choose to optimize ELBO as our objective. We rely on one key property of ELBOs, their decomposability. Before defining decomposability, we formally define a K-step inference path as a finite, monotonically increasing sequence of timesteps 0 = t′0 < t ′ 1 < ... < t ′ K−1 < t ′ K = 1. Now, given a set S ⊆ [0, 1], we define a family of lower bounds L of an “ideal” objective Lideal to be S-decomposable if:
1. There is a bijection from L to the set of all inference paths t with all timesteps in S, and 2. There exists a function L : S × S → [0,∞) such that, for all inference paths t with all
timesteps in S, L_{\mathrm{ideal}} \ge \sum_{i=1}^{|t|-1} L(t_i, t_{i-1}) + C (C a constant).
We now show that DDPM ELBOs are decomposable. As shown by Song et al. (2020); Nichol & Dhariwal (2021) and the equations in Section 3, for any K and any K-step inference path t, there is a corresponding ELBO
-L_{\mathrm{ELBO}} = \mathbb{E}_q D_{\mathrm{KL}}\big( q(x_1 \mid x_0) \,\|\, p_\theta(x_1) \big) + \sum_{i=1}^{K} L(t'_i, t'_{i-1}) \quad (16)
where
L(t, s) = \begin{cases} -\mathbb{E}_q \log p_\theta(x_0 \mid x_t) & s = 0 \\ \mathbb{E}_q D_{\mathrm{KL}}\big( q(x_s \mid x_t, x_0) \,\|\, p_\theta(x_s \mid x_t) \big) & s > 0 \end{cases} \quad (17)
Since all of these are lower bounds of Eq log p(x0), we conclude the family of DDPM evidence lower bounds is decomposable. Specifically, a DDPM trained on a set of timesteps S admits S-decomposable ELBOs. For DDPMs trained with continuous timesteps, S = [0, 1]. For DDPMs trained on discrete timesteps, S is the set of those timesteps, as there is no guarantee that the behavior of the model won't be pathological when given timesteps it has never seen during training. Now the question remains: given a fixed budget of K steps, what is the optimal inference path?
First, we observe that any two paths that share a (t, s) transition will share a common L(t, s) term. We exploit this property in our dynamic programming algorithm. When given a grid S of plausible inference paths 0 = t0 < t1 < ... < tT−1 < tT = 1 with T ≥ K, it is possible to efficiently find the ELBO-optimal K-step inference path contained in S by memoizing all the individual L(t, s) ELBO terms for s, t ∈ {t0, ..., tT } with s < t. We can then solve the canonical least-cost-path problem on a directed graph where s→ t are nodes and the edge connecting them has cost L(t, s).
4.2 DYNAMIC PROGRAMMING ALGORITHM
We now outline our methodology to solve the least-cost-path problem. Our solution is similar to Dijkstra's algorithm, but our problem differs from the classical least-cost-path problem for which the latter is typically used, as it has additional constraints: we restrict our search to paths of exactly K + 1 nodes, and the start and end nodes are fixed.
Let C and D be (K + 1)× (T + 1) matrices. C[k, t] will be the total cost of the least-cost-path of length k from t to 0. D will be filled with the timesteps corresponding to such paths; i.e., D[k, t] will be the timestep s immediately previous to t for the optimal k-step path (assuming t is also part of such path).
We initialize C[0, 0] = 0 and all the other C[0, ·] to∞ (the D[0, ·] are irrelevant, but for ease of index notation we keep them in this section). Then, for each k from 1 to K, we iteratively set, for each t,
C[k, t] = \min_s \big( C[k-1, s] + L(t, s) \big) , \qquad D[k, t] = \operatorname{argmin}_s \big( C[k-1, s] + L(t, s) \big) ,
where L(t, s) is the cost to transition from t to s (see Equation 17). For all s ≥ t, we set L(t, s) =∞ (e.g., we only move backwards in the diffusion process). This procedure captures the shortest path cost in C and the shortest path itself in D. We further observe that running the DP algorithm for each k from 1 to T (instead of K), we can extract the optimal paths for all possible budgets K. Algorithm 1 illustrates a vectorized version of the procedure we have outlined in this section, while Algorithm 2 shows how to explicitly extract the optimal paths from D.
4.3 EFFICIENT MEMOIZATION
A priori, our dynamic programming approach appears to be inefficient because it requires computing O(T^2) terms (recall, as we rely on all the L(t, s) terms which depend on a neural network forward pass). We however observe that a single forward pass of the DDPM can be used to compute all the L(t, ·) terms. This holds true even in the case where the pre-trained DDPM learns the variances. For example, in Nichol & Dhariwal (2021), instead of fixing them to \tilde{g}_{ts} = \frac{g_{ts}\, g_{s0}}{g_{t0}} as we outlined in the previous section, the forward pass itself still only depends on t and not s, and the variance of p_\theta(x_s \mid x_t) is obtained by interpolating the forward pass's output logits v with \exp\big(v \log g_{ts}^2 + (1-v) \log \tilde{g}_{ts}^2\big). Thus, computing the table of all the L(t, s) ELBO terms only requires O(T) forward passes.
5 EXPERIMENTS
We apply our method on a wide variety of pre-trained DDPMs from prior work. This emphasizes the fact that our method is applicable to any pre-trained DDPM model. In particular, we rely on the CIFAR10 model checkpoints released by Nichol & Dhariwal (2021) on both their Lhybrid and Lvlb objectives. We also showcase results on CIFAR10 (Krizhevsky et al., 2009) with the exact configuration used by Ho et al. (2020), which we denote as Lsimple, as well as Lhybrid on ImageNet 64x64 (Deng et al.,
2009) following Nichol & Dhariwal (2021), training these last two models from scratch for 800K and 3M steps, respectively, but otherwise using the exact same configurations as the authors.
In our experiments, we always search over a grid that includes all the timesteps used to train the model, i.e., {t/T : t ∈ {1, ..., T − 1}}. For our CIFAR10 results, we computed the memoization tables with Monte Carlo estimates over the full training dataset, while on ImageNet 64x64 we limited the number of datapoints in the Monte Carlo estimates to 16,384 images on the training dataset.
For each pre-trained model, we compare the negative log likelihoods (estimated using the full heldout dataset) of the strides discovered by our dynamic programming algorithm against even and quadratic strides, following Song et al. (2020). We find that our dynamic programming algorithm discovers strides resulting in much better log likelihoods than the hand-crafted strides used in prior work, particularly in the few-step regime. We provide a visualization of the log likelihood curves as a function of computation budget in Figure 1 for Lsimple CIFAR10 and Lhybrid ImageNet 64x64 (Deng et al., 2009), a full list of the scores in the few-step regime in Table 1, and a visualization of the discovered steps themselves in Figure 2.
5.1 COMPARISON WITH FID
We further evaluate our discovered strides by reporting FID scores (Heusel et al., 2017) on 50,000 model samples against the same number of samples from the training dataset, as is standard in the literature. We find that, although our strides yield much better log likelihoods, such optimization does not necessarily translate to also improving the FID scores. Results are included in Figure 3. This weakened correlation between log-likelihoods and FID is consistent with observations in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021).
To remedy this issue, we show that despite our focus in this work being likelihood, we can significantly improve the FID scores discovered by our method simply by optimizing a reweighted ELBO. Recall that, as discussed in Section 4, our proposed method is compatible with any decomposable objective. Moreover, prior work has shown that the choice of ELBO weights has a significant effect on sample quality (Ho et al., 2020; Durkan & Song, 2021), and that choosing weights corresponds to choosing an equally valid variational lower bound of the data for a DDIM (Song et al., 2020). Similarly to prior work in the VAE literature, we thus stumble upon an open problem where different variational lower bounds compatible with the model (even with the same bits/dim) can lead to samples with different qualitative charachteristics (Alemi et al., 2018). As our focus is likelihood, we leave this research question of finding the weights / ELBO that lead to most agreement with FID for future work, but nevertheless identify one such choice that favourably biases our algorithm toward this front. Details about our construction of weights and results are included in the appendix (A.3).
5.2 MONTE CARLO ABLATION
To investigate the feasibility of our approach using minimal computation, we experimented with setting the number of Monte Carlo datapoints used to compute the dynamic programming table of negative log likelihood terms to 128 samples (i.e., easily fitting into a single batch in GPU memory). We find that, for CIFAR10, the difference in log likelihoods is negligible, while on ImageNet 64x64 there is a visible yet slight improvement in negative log likelihood when filling the table with more samples. We hypothesize that this is due to the higher diversity of ImageNet. Nevertheless, we highlight that our procedure can be applied very quickly (i.e., with just T forward passes of a neural network when using a single batch, as opposed to a running average over batches), even for large models, to significantly improve their log likelihoods in the few-step regime.
6 RELATED WORK
DDPMs (Ho et al., 2020) have recently shown results that are competitive with GANs (Goodfellow et al., 2014), and they can be traced back to the work of Sohl-Dickstein et al. (2015) as a restricted family of deep latent variable models. Dhariwal & Nichol (2021) have more recently shown that DDPMs can outperform GANs in FID scores (Heusel et al., 2017). Song & Ermon (2019) have also linked DDPMs to denoising score matching (Vincent et al., 2008; 2010), which is crucial to the continuous-time formulation (Song et al., 2021; Kingma et al., 2021). More recent work on the few-step regime of DDPMs (Song et al., 2020; Chen et al., 2021; Nichol & Dhariwal, 2021; San-Roman et al., 2021; Kong & Ping, 2021; Jolicoeur-Martineau et al., 2021) has also guided our research efforts. DDPMs are also very closely related to variational autoencoders (Kingma & Welling,
2013), where more recent work has shown that, with many stochastic layers, they can also attain competitive negative log likelihoods in unconditional image generation (Child, 2020). Also very closely related to DDPMs, there has been work on non-autoregressive modeling of text sequences that can be regarded as discrete-space DDPMs with a forward process that masks or removes tokens (Lee et al., 2018; Gu et al., 2019; Stern et al., 2019; Chan et al., 2020; Saharia et al., 2020). The UNet architecture (Ronneberger et al., 2015) has been key to the recent success of DDPMs, and as shown by Ho et al. (2020); Nichol & Dhariwal (2021), augmenting UNet with self-attention (Shaw et al., 2018) at scales where attention is computationally feasible has helped bring DDPMs closer to the current state-of-the-art autoregressive generative models (Child et al., 2019; Jun et al., 2020; Roy et al., 2021).
7 CONCLUSION AND DISCUSSION
By regarding the selection of the inference schedule as an optimization problem, we present a novel and efficient approach to discover a likelihood-optimal inference schedule for a pre-trained DDPM with a simple dynamic programming algorithm. Our method need only be applied once to discover the schedule, and does not require re-training the DDPM. In the few-step regime, we discover schedules on Lsimple CIFAR10 and Lhybrid ImageNet 64x64 that require only 32 steps, yet sacrifice ≤ 0.1 bits per dimension compared to state-of-the-art DDPMs using hundreds-to-thousands of refinement steps. Our approach only needs forward passes of the DDPM neural network to fill the dynamic programming table of L(t, s) terms, and we show that we can fill the dynamic programming table with just O(T) forward passes. Moreover, we can estimate the table using only 128 Monte Carlo samples, finding this to be sufficient even for datasets such as ImageNet with high diversity. Our method achieves strong likelihoods with very few refinement steps, outperforming prior work utilizing hand-crafted strides (Ho et al., 2020; Nichol & Dhariwal, 2021).
Despite very strong log-likelihood results, we observe that maximizing the unweighted ELBO can actually lead to higher (worse) FID scores, and on ImageNet 64x64, a decrease in sample quality for the smallest budgets K ∈ {32, 64}; this is consistent with findings in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021). Nevertheless, our method is compatible with any decomposable objective such as reweighted variational lower bounds, and we show that a simple choice of reweighted ELBO (or, equivalently, a choice of DDIM) can remedy this issue. Developing principled methods to choose variational lower bounds or other decomposable metrics that correlate best with image quality is thus an important direction for future research. Finally, we remark that likelihood optimization itself is useful for specific applications: beyond better compression, in domains such as non-autoregressive text generation, where likelihood correlates much better with sample quality and where diffusion models are starting to make progress (Austin et al., 2021), our method has the potential to improve sampling speed at far less cost in generation fidelity and without such adaptations.
REPRODUCIBILITY STATEMENT
We will fully open source our work and provide code pointers in the camera-ready version of the paper. Nevertheless, we provide pseudocode with a complete implementation of our proposed algorithm to maximize ease of reproducibility while we work on open-sourcing our code (see Algorithms 1 and 2). Since we experiment with open-sourced datasets and pre-trained models that already have publicly available checkpoints, our work is fully reproducible. We additionally emphasize that our method has no hyperparameters of its own.
ETHICS STATEMENT
Innovations in generative models have the potential to enable harmful and unethical applications. In applications where no harm is intended, bias and other failure modes of generative models and datasets used to train them can also lead to issues with fairness, discrimination, and other forms of harm. While our work is focused on making diffusion models more efficient, we believe its public release will not cause any form of immediate harm, as much more efficient generative models for images like GANs can still achieve high sample quality at a fraction of the speed of diffusion models.
A APPENDIX
A.1 PROOF FOR EQUATION 12
From Equation 10, we get by implicit differentiation that
f(t) = ψ(t, 0) = exp(∫_0^t f_sde(u) du)
⇒ f'(t) = exp(∫_0^t f_sde(u) du) · (d/dt) ∫_0^t f_sde(u) du = f(t) f_sde(t)
⇒ f_sde(t) = f'(t) / f(t).
Similarly as above, and also using the fact that ψ(t, s) = ψ(t, 0)/ψ(s, 0),
g(t)² = ∫_0^t ψ(t, u)² g_sde(u)² du = ∫_0^t (f(t)²/f(u)²) g_sde(u)² du = f(t)² ∫_0^t (g_sde(u)²/f(u)²) du
⇒ 2 g(t) g'(t) = 2 f(t) f'(t) g(t)²/f(t)² + f(t)² (d/dt) ∫_0^t (g_sde(u)²/f(u)²) du = 2 f_sde(t) g(t)² + g_sde(t)²
⇒ g_sde(t) = √(2 (g(t) g'(t) − f_sde(t) g(t)²)).
A.2 PROOF FOR EQUATIONS 13 AND 14
From Equation 10 and ψ(t, s) = ψ(t, 0)/ψ(s, 0) it is immediate that f_ts x_s is the mean of q(x_t|x_s). To show that g²_ts is the variance of q(x_t|x_s), Equation 10 implies that
Var[x_t|x_s] = ∫_s^t ψ(t, u)² g_sde(u)² du
= ∫_0^t ψ(t, u)² g_sde(u)² du − ∫_0^s ψ(t, u)² g_sde(u)² du
= g(t)² − ψ(t, 0)² ∫_0^s (ψ(s, u)² / (ψ(s, u)² ψ(u, 0)²)) g_sde(u)² du
= g(t)² − ψ(t, 0)² ∫_0^s (ψ(s, u)²/ψ(s, 0)²) g_sde(u)² du = g(t)² − ψ(t, s)² g(s)²
= g(t)² − f²_ts g(s)² = g²_ts.
The mean of q(x_s|x_t, x_0) is given by the Gaussian conjugate prior formula (where all the distributions are conditioned on x_0). Let µ = f_ts x_s, so we have a prior over µ given by
x_s|x_0 ∼ N(f_s0 x_0, g²_s0 I_d) ⇒ µ|x_0 ∼ N(f_s0 f_ts x_0, f²_ts g²_s0 I_d) ∼ N(f_t0 x_0, f²_ts g²_s0 I_d),
and a likelihood with mean µ:
x_t|x_s, x_0 ∼ x_t|x_s ∼ N(f_ts x_s, g²_ts I_d) ⇒ x_t|µ, x_0 ∼ x_t|µ ∼ N(µ, g²_ts I_d).
Then it follows by the formula that µ|x_t, x_0 has variance
Var[µ|x_t, x_0] = (1/(f²_ts g²_s0) + 1/g²_ts)^(−1) = ((g²_ts + f²_ts g²_s0)/(f²_ts g²_s0 g²_ts))^(−1) = f²_ts g²_s0 g²_ts / (g²_ts + f²_ts g²_s0)
⇒ Var[x_s|x_t, x_0] = (1/f²_ts) Var[µ|x_t, x_0] = g²_s0 g²_ts / (g²_ts + f²_ts g²_s0) = g²_s0 g²_ts / g²_t0 = g̃²_ts,
and mean
E[µ|x_t, x_0] = (1/(f²_ts g²_s0) + 1/g²_ts)^(−1) (f_t0 x_0/(f²_ts g²_s0) + x_t/g²_ts) = (f_t0 g²_ts x_0 + f²_ts g²_s0 x_t)/(g²_ts + f²_ts g²_s0) = (f_t0 g²_ts x_0 + f²_ts g²_s0 x_t)/g²_t0
⇒ E[x_s|x_t, x_0] = (1/f_ts) E[µ|x_t, x_0] = ((f_t0/f_ts) g²_ts x_0 + f_ts g²_s0 x_t)/g²_t0 = (f_s0 g²_ts x_0 + f_ts g²_s0 x_t)/g²_t0 = f̃_ts(x_t, x_0).
A.3 REWEIGHTED ELBO RESULTS
While the focus of our work is likelihood, we report FID scores for the sake of completeness, as well as to show the adaptability of our method via reweighting to focus on sample quality, as mentioned in Section 5.1.
To choose a reweighting scheme for each L(t, s) term that takes into account both t and s, Kingma et al. (2021) show that choosing discretized terms
L_w(t, s) = [ −∫_s^t w(u) SNR'(u) du ] ‖x_0 − x̂_0(x_t, t)‖²   (18)
ensures that, in the limit of infinitely many timesteps, the continuous ELBO becomes −(1/2) E_q ∫_0^1 w(t) SNR'(t) ‖x_0 − x̂_0(x_t, t)‖² dt (where SNR(t) = f²_t0/g²_t0, and a constant w(t) = 1 yields the unweighted ELBO).
While choosing w(t) = −SNR(t)/SNR'(t) (which is Lsimple in the limit) did not work, we find that choosing w(t) = −1/SNR'(t) (which leads to a continuous objective of unweighted mean squared errors and L_w(t, s) = (t − s) ‖x_0 − x̂_0(x_t, t)‖²) allows our algorithm to outperform DDPM FID scores and achieve similar scores to DDIM (Song et al., 2020). We call this “MSE reweighting” and include FID scores in Table 5, comparing to DDPM but also DDIM(η = 0), which does not admit likelihood computation but has been shown to be one of the strongest FID baselines in the few-step regime. The FID scores were estimated with 50,000 model and training data samples, as is standard in the literature.
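As a minimal illustration of this reweighting, the sketch below converts a per-timestep table of Monte Carlo squared-error estimates into the MSE-reweighted terms L_w(t, s) = (t − s) ‖x_0 − x̂_0(x_t, t)‖²; the input `mse_table` is a hypothetical array of such estimates (one per timestep), not something produced in this snippet, and the resulting table can be fed to the same dynamic program as the ELBO table.

    import numpy as np

    def mse_reweighted_table(mse_table, T):
        # mse_table[t] holds an estimate of E||x0 - x0_hat(x_t, t)||^2 at timestep t/T.
        t = np.arange(T + 1)[:, None] / T      # row index: current timestep t
        s = np.arange(T + 1)[None, :] / T      # column index: previous timestep s
        widths = np.clip(t - s, 0.0, None)     # (t - s), zero where s >= t
        Lw = widths * np.asarray(mse_table)[:, None]
        Lw[widths <= 0] = np.inf               # forbid non-decreasing transitions
        return Lw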
A.4 NOTE ON SAMPLING STRATEGIES FOR FAIR COMPARISON
Finally, we discuss an alternative approach to the one used in the figures of the paper for producing more qualitatively comparable samples. Given a fixed budget K, Figures 4 and 5 are produced to generate “comparable” samples across different strides by fixing all the standard Gaussian vectors in the sampling chain. However, another approach that allows comparing samples across different budgets is to fix a single Brownian motion trajectory over all T steps, and to use discretizations (based on any stride) of this single random trajectory to generate the samples. We empirically find, however, that the former approach tends to produce much more similar images (see Figures 7 and 8 below). We suspect that the use of different strides (and hence different random directions from the fixed random trajectory), along with the very chaotic, non-linear behavior of DDPMs, is the cause of this behavior. | 1. What is the focus and contribution of the paper on dynamic programming for sampling from diffusion models?
2. What are the strengths of the proposed approach, particularly in its ability to handle coarse discretizations?
3. What are the weaknesses of the paper regarding its explanations and clarity?
4. Do you have any concerns about the limitations of the approach, such as scalability and applicability to certain types of regularization methods?
5. How does the reviewer assess the efficiency of the proposed method compared to other approaches, and what are the potential gains of using this method? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a dynamic programming algorithm to sample from diffusion models. In short, they solve a Dijkstra-type problem on pretrained diffusions, and show good results even with coarse discretizations.
Review
Some things are introduced but never clearly explained. The most important one is that you never explicitly say what decomposable means. I think most readers can figure it out, but only after reading the paper. You write a bit on page 2, but I would make it even more clear, as it is important for reading the rest. Also: what is an ELBO path?
On page 4, you write: "we can optimize a loss or reward function with respect to the timesteps themselves (after the DDPM is trained)." Can you explain again what this means?
Condition 1 on page 4: "The path starts at t = 0 and ends at t = 1." Is it not possible to both scale and translate the timescale? How restrictive is this really?
What is it about some regularization methods that makes your approach not work? Breaking the decomposability? Can the authors think of other regularization methods that break the approach?
My key concern: In actual compute time (say, seconds), how long does the DP stride with 128 steps take compared to the 128-step quadratic stride? To me it looks like 128 is enough for quadratic stride to catch up to your method, so how much is there to win by choosing your algorithm?
ICLR | Title
Learning to Efficiently Sample from Diffusion Probabilistic Models
Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful family of generative models that yield high-fidelity samples and competitive log-likelihoods across a range of domains, including image and speech synthesis. Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models. However, DDPMs typically require hundreds-to-thousands of steps to generate a high fidelity sample, making them prohibitively expensive for high dimensional problems. Fortunately, DDPMs allow trading generation speed for sample quality through adjusting the number of refinement steps during inference. Prior work has been successful in improving generation speed by hand-crafting the time schedule through trial and error. We instead view the selection of the inference time schedules as an optimization problem, and show that, with a simple dynamic programming algorithm, one can find the log-likelihood-optimal discrete time schedules for any pre-trained DDPM. Our method exploits the fact that the evidence lower bound (ELBO) can be decomposed into separate KL divergence terms, and given any computation budget, we discover the time schedule that maximizes the training ELBO exactly. Our method is efficient, has no hyper-parameters of its own, and can be applied to any pre-trained DDPM with no retraining. We discover inference time schedules requiring as few as 32 refinement steps, while sacrificing less than 0.1 bits per dimension compared to the default 4,000 steps used on an ImageNet 64x64 model.
1 INTRODUCTION
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful class of generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020). DDPMs model the data distribution through an iterative denoising process, and have been applied successfully to a variety of applications, including unconditional image generation (Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021; Nichol & Dhariwal, 2021), shape generation (Cai et al., 2020), text-to-speech (Chen et al., 2021; Kong et al., 2020) and single image super-resolution (Saharia et al., 2021; Li et al., 2021).
DDPMs are easy to train, featuring a simple denoising objective (Ho et al., 2020) with noise schedules that successfully transfer across different models and datasets. This contrasts with Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which require an inner-outer loop optimization procedure that often entails instability and requires careful hyperparameter tuning. DDPMs also admit a simple non-autoregressive inference process; this contrasts with autoregressive models and their often prohibitive computational costs on high dimensional data. The DDPM inference process starts with samples from the corresponding prior noise distribution (e.g., standard Gaussian), and iteratively denoises the samples under the fixed noise schedule. However, DDPMs often need hundreds-to-thousands of denoising steps (each involving a feedforward pass of a large neural network) to achieve strong results. While this process is still much faster than autoregressive models, it is often computationally prohibitive, especially when modeling high dimensional data.
There has been much recent work focused on improving the sampling speed of DDPMs. WaveGrad (Chen et al., 2021) introduced a manually crafted schedule requiring only 6 refinement steps; however, this schedule seems to be only applicable to the vocoding task where there is a very strong conditioning signal. Denoising Diffusion Implicit Models (DDIMs) (Song et al., 2020) accelerate sampling from
pre-trained DDPMs by relying on a family of non-Markovian processes. They accelerate the generative process by taking multiple diffusion steps at a time. However, DDIMs sacrifice the ability to compute log-likelihoods. Nichol & Dhariwal (2021) also explored the use of ancestral sampling with a subsequence of the original denoising steps, trying both a uniform stride and other hand-crafted strides. San-Roman et al. (2021) improve few-step sampling further by training a separate model, after training a DDPM, to estimate the level of noise, and modifying inference to dynamically adjust the noise schedule at every step to match the predicted noise level.
All these fast-sampling techniques rely on a key property of DDPMs – there is a decoupling between the training and inference schedule. The training schedule need not be the same as the inference schedule, e.g., a diffusion model trained to use 1000 steps may actually use only 10 steps during inference. This decoupling characteristic is typically not found in other generative models. In past work, the choice of inference schedule was often considered a hyperparameter selection problem, and often selected via intuition or extensive hyperparameter exploration (Chen et al., 2021). In this work, we view the choice of the timesteps of the inference schedule (which we just call an inference path) as an independent optimization problem, wherein we attempt to learn the best schedule. Our approach relies on the observation that we can solve this optimization problem with dynamic programming. Given a fixed budget of K refinement steps and a pre-trained DDPM, we find the set of timesteps that maximizes the corresponding evidence lower bound (ELBO). As an optimization objective, the ELBO has a key decomposability property: the total ELBO is the sum of individual KL terms, and for any two inference paths, if the timesteps (s, t) contiguously occur in both, they share a common KL term, therefore admitting memoization (see Section 4.1 for a precise definition).
Our main contributions are the following:
• We introduce a method that finds the likelihood-optimal inference paths with a simple dynamic programming algorithm for all possible computation budgets of K refinement steps. The algorithm searches over T > K timesteps, only requiring O(T) neural network forward passes. It only needs to be applied once to a pre-trained DDPM, does not require training or retraining a DDPM, and is applicable to both time-discrete and time-continuous DDPMs.
• We experiment with DDPM models from prior work. On both Lsimple CIFAR10 and Lhybrid ImageNet 64x64, we discover schedules which require only 32 refinement steps, yet sacrifice only 0.1 bits per dimension compared to their original counterparts with 1,000 and 4,000 steps, respectively.
• We show that our method can be applied to any decomposable set of objectives. In particular, optimizing a reweighted ELBO can favourably bias our algorithm towards solutions with better FID scores, as we find that optimizing the exact variational lower bound may lead to worse FID scores, which is consistent with prior work on unconditional image generation.
2 BACKGROUND ON DENOISING DIFFUSION PROBABILISTIC MODELS
Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020; Sohl-Dickstein et al., 2015) are defined in terms of a forward Markovian diffusion process q and a learned reverse process pθ. The forward diffusion process gradually adds Gaussian noise to a data point x0 through T iterations,
q(x_{1:T} | x_0) = ∏_{t=1}^T q(x_t | x_{t−1}),   (1)
q(x_t | x_{t−1}) = N(x_t | √α_t x_{t−1}, (1 − α_t) I),   (2)
where the scalar parameters α1:T determine the variance of the noise added at each diffusion step, subject to 0 < αt < 1. The learned reverse process aims to model q(x0) by inverting the forward process, gradually removing noise from signal starting from pure Gaussian noise xT ,
p(x_T) = N(x_T | 0, I)   (3)
p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^T p_θ(x_{t−1} | x_t)   (4)
p_θ(x_{t−1} | x_t) = N(x_{t−1} | µ_θ(x_t, t), σ²_t I).   (5)
The parameters of the reverse process can be optimized by maximizing the following variational lower bound on the training set:
E_q log p(x_0) ≥ E_q [ log p_θ(x_0|x_1) − ∑_{t=2}^T D_KL( q(x_{t−1}|x_t, x_0) ‖ p_θ(x_{t−1}|x_t) ) − L_T(x_0) ]   (6)
where L_T(x_0) = D_KL( q(x_T|x_0) ‖ p(x_T) ). Nichol & Dhariwal (2021) have demonstrated that training DDPMs by maximizing the ELBO yields competitive log-likelihood scores on both CIFAR10 and ImageNet 64×64, achieving 2.94 and 3.53 bits per dimension respectively. Two notable properties of the Gaussian diffusion process that help formulate DDPMs tractably and efficiently include:
q(x_t | x_0) = N(x_t | √γ_t x_0, (1 − γ_t) I), where γ_t = ∏_{i=1}^t α_i,   (7)
q(x_{t−1} | x_0, x_t) = N( x_{t−1} | [√γ_{t−1} (1 − α_t) x_0 + √α_t (1 − γ_{t−1}) x_t] / (1 − γ_t), [(1 − γ_{t−1})(1 − α_t) / (1 − γ_t)] I ).   (8)
Given the marginal distribution of x_t given x_0 in (7), one can sample from q(x_t | x_0) independently for different t and perform SGD on a randomly chosen KL term in (6). Furthermore, given that the posterior distribution of x_{t−1} given x_t and x_0 is Gaussian, one can compute each KL term in (6) between two Gaussians in closed form and avoid high-variance Monte Carlo estimation.
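For concreteness, each such KL term reduces to the standard closed form for diagonal Gaussians; the sketch below is a generic implementation of that formula and is not code from the paper.

    import numpy as np

    def gaussian_kl(mu_q, var_q, mu_p, var_p):
        # KL( N(mu_q, var_q I) || N(mu_p, var_p I) ) for diagonal Gaussians,
        # summed over dimensions; variances may be per-dimension arrays or scalars.
        return 0.5 * np.sum(
            np.log(var_p) - np.log(var_q)
            + (var_q + (mu_q - mu_p) ** 2) / var_p
            - 1.0
        )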
3 LINKING DDPMS TO CONTINUOUS TIME AFFINE DIFFUSION PROCESSES
Before describing our approach to efficiently sampling from DDPMs, it is helpful to link DDPMs to continuous time affine diffusion processes, as it shows the compatibility of our approach with both time-discrete and time-continuous DDPMs (Song et al., 2021; Kingma et al., 2021). Let x_0 ∼ q(x_0) denote a data point drawn from the empirical distribution of interest and let q(x_t|x_0) denote a stochastic process for t ∈ [0, 1] defined through an affine diffusion process via the following stochastic differential equation (SDE):
dX_t = f_sde(t) X_t dt + g_sde(t) dB_t,   (9)
where f_sde, g_sde : [0, 1] → [0, 1] are integrable functions satisfying f_sde(0) = 1 and g_sde(0) = 0. Following Särkkä & Solin (2019) (section 6.1), we can compute the exact marginals q(x_t|x_s) for any 0 ≤ s < t ≤ 1. We get:
q(x_t | x_s) = N( x_t | ψ(t, s) x_s, (∫_s^t ψ(t, u)² g_sde(u)² du) I )   (10)
where ψ(t, s) = exp ∫_s^t f_sde(u) du. Since these integrals are difficult to work with, we instead propose (in parallel to Kingma et al. (2021)) to define the marginals directly:
q(x_t | x_0) = N(x_t | f(t) x_0, g(t)² I)   (11)
where f, g : [0, 1] → [0, 1] are differentiable, monotonic functions satisfying f(0) = 1, f(1) = 0, g(0) = 0, g(1) = 1. Then, by implicit differentiation it follows that the corresponding diffusion is
dX_t = (f'(t)/f(t)) X_t dt + √( 2 g(t) ( g'(t) − f'(t) g(t)/f(t) ) ) dB_t.   (12)
We provide a proof for Equation 12 in the appendix (A.1). To complete our formulation, let f_ts = f(t)/f(s) and g_ts = √(g(t)² − f²_ts g(s)²). Then, it follows that for any 0 < s < t ≤ 1 we have that
q(x_t | x_s) = N( x_t | f_ts x_s, g²_ts I ),   (13)
q(x_s | x_t, x_0) = N( x_s | (1/g²_t0)(f_s0 g²_ts x_0 + f_ts g²_s0 x_t), (g²_s0 g²_ts / g²_t0) I ),   (14)
We include proofs for (13) and (14) in the appendix (A.2). These equations show that we can perform inference with any ancestral sampling path (i.e., the timesteps can attain continuous values) by formulating the reverse process in terms of the posterior distribution as
p_θ(x_s | x_t) = q( x_s | x_t, x̂_0 = (1/f_t0)(x_t − g_t0 ε_θ(x_t, t)) ),   (15)
justifying the compatibility of our main approach with time-continuous DDPMs. We note that this reverse process is also mathematically equivalent to a reverse process based on a time-discrete DDPM derived from a subsequence of the original timesteps as done by Song et al. (2020); Nichol & Dhariwal (2021). For the case of s = 0 in the reverse process, we follow the parametrization of Ho et al. (2020) to obtain discretized log likelihoods and compare our log likelihoods fairly with prior work.
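As an illustration, the sketch below computes the transition and posterior parameters of Equations 13 and 14 from user-supplied marginal schedules f and g (e.g., f(t) = √γ_t and g(t) = √(1 − γ_t) in the discrete-time case); it is a minimal reading of the formulas above, not code released with the paper.

    import numpy as np

    def transition_params(f, g, t, s):
        # Mean scale and standard deviation of q(x_t | x_s), Equation 13.
        f_ts = f(t) / f(s)
        g_ts = np.sqrt(g(t) ** 2 - f_ts ** 2 * g(s) ** 2)
        return f_ts, g_ts

    def posterior_params(f, g, t, s, x_t, x_0):
        # Mean and variance of q(x_s | x_t, x_0), Equation 14.
        f_ts, g_ts = transition_params(f, g, t, s)
        f_s0, g_s0 = f(s), g(s)
        g_t0_sq = g(t) ** 2
        mean = (f_s0 * g_ts ** 2 * x_0 + f_ts * g_s0 ** 2 * x_t) / g_t0_sq
        var = (g_s0 ** 2 * g_ts ** 2) / g_t0_sq
        return mean, var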
Algorithm 1: Given a matrix L of shape (T+1) × (T+1) of precomputed L(·, ·) terms, find the likelihood-optimal schedules for all step budgets.

    import numpy as np

    def vectorized_dp_all_budgets(L):
        T = len(L) - 1
        D = np.full(L.shape, -1)          # D[k, t]: timestep preceding t on the best k-step path
        C = np.full(L.shape, np.inf)      # C[k, t]: cost of the best k-step path from 0 to t
        C[0, 0] = 0
        for k in range(1, T + 1):
            bpds = C[k - 1, None] + L     # bpds[t, s] = C[k-1, s] + L(t, s)
            C[k] = np.amin(bpds, axis=-1)
            D[k] = np.argmin(bpds, axis=-1)
        return D

Algorithm 2: Fetch the shortest path of K steps from the dynamic programming results implicitly returned by Algorithm 1.

    def fetch_shortest_path(D, K):
        optpath = []
        t = K
        for k in reversed(range(K)):
            optpath.append(t)
            t = D[k, t]
        return optpath
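A minimal usage sketch is given below. It assumes the (T+1) × (T+1) table L of L(t, s) terms has already been precomputed (as described in Section 4.3), and it writes the backtrack out explicitly, starting from the last grid index T so that the extracted K-step schedule ends at t = 1; this is an illustrative reading of the algorithm rather than the authors' exact code.

    import numpy as np

    # Assumption: `L` is the (T+1) x (T+1) table of L(t, s) terms, with
    # L[t, s] = np.inf whenever s >= t (we only move backwards in time).
    D = vectorized_dp_all_budgets(L)

    K = 32                                    # desired number of refinement steps
    T = len(L) - 1
    path, t = [], T                           # start the backtrack at the last grid index
    for k in reversed(range(1, K + 1)):       # so the schedule ends at t = 1
        path.append(t)
        t = D[k, t]
    path.append(0)                            # the path terminates at timestep 0
    timesteps = [u / T for u in reversed(path)]   # schedule as fractions in [0, 1]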
4 LEARNING TO EFFICIENTLY SAMPLE FROM DDPMS
We now introduce our dynamic programming (DP) approach. In general, after training a DDPM, one can use a different inference path than the one used during training. Additionally, one can optimize a loss or reward function with respect to the timesteps themselves after the DDPM is trained. In this paper, we use the ELBO as our loss function; however, we note that it is possible to directly optimize the timesteps with other objectives.
4.1 OPTIMIZING THE ELBO
In our work, we choose to optimize the ELBO as our objective. We rely on one key property of ELBOs, their decomposability. Before defining decomposability, we formally define a K-step inference path as a finite, monotonically increasing sequence of timesteps 0 = t'_0 < t'_1 < ... < t'_{K−1} < t'_K = 1. Now, given a set S ⊆ [0, 1], we define a family of lower bounds L of an “ideal” objective L_ideal to be S-decomposable if:
1. There is a bijection from L to the set of all inference paths t with all timesteps in S, and
2. There exists a function L : S × S → [0, ∞) such that, for all inference paths t with all timesteps in S, L_ideal ≥ ∑_{i=1}^{|t|−1} L(t_i, t_{i−1}) + C (C a constant).
We now show that DDPM ELBOs are decomposable. As shown by Song et al. (2020); Nichol & Dhariwal (2021) and the equations in Section 3, for any K and any K-step inference path t, there is a corresponding ELBO
−L_ELBO = E_q D_KL( q(x_1|x_0) ‖ p_θ(x_1) ) + ∑_{i=1}^K L(t'_i, t'_{i−1})   (16)
where
L(t, s) = −E_q log p_θ(x_0|x_t) if s = 0, and L(t, s) = E_q D_KL( q(x_s|x_t, x_0) ‖ p_θ(x_s|x_t) ) if s > 0.   (17)
Since all of these are lower bounds of E_q log p(x_0), we conclude the family of DDPM evidence lower bounds is decomposable. Specifically, a DDPM trained on a set of timesteps S admits S-decomposable ELBOs. For DDPMs trained with continuous timesteps, S = [0, 1]. For DDPMs trained on discrete timesteps, S is the set of those timesteps, as there is no guarantee that the behavior of the model won't be pathological when given timesteps it has never seen during training. The question now remains: given a fixed budget of K steps, what is the optimal inference path?
First, we observe that any two paths that share a (t, s) transition will share a common L(t, s) term. We exploit this property in our dynamic programming algorithm. When given a grid S of plausible timesteps 0 = t_0 < t_1 < ... < t_{T−1} < t_T = 1 with T ≥ K, it is possible to efficiently find the ELBO-optimal K-step inference path contained in S by memoizing all the individual L(t, s) ELBO terms for s, t ∈ {t_0, ..., t_T} with s < t. We can then solve the canonical least-cost-path problem on a directed graph whose nodes are the timesteps and where the edge connecting t to s has cost L(t, s).
4.2 DYNAMIC PROGRAMMING ALGORITHM
We now outline our methodology to solve the least-cost-path problem. Our solution is similar to Dijkstra's algorithm, but it differs from the classical least-cost-path problem where the latter is typically used, as our problem has additional constraints: we restrict our search to paths of exactly K + 1 nodes, and the start and end nodes are fixed.
Let C and D be (K + 1)× (T + 1) matrices. C[k, t] will be the total cost of the least-cost-path of length k from t to 0. D will be filled with the timesteps corresponding to such paths; i.e., D[k, t] will be the timestep s immediately previous to t for the optimal k-step path (assuming t is also part of such path).
We initialize C[0, 0] = 0 and all the other C[0, ·] to ∞ (the D[0, ·] are irrelevant, but for ease of index notation we keep them in this section). Then, for each k from 1 to K, we iteratively set, for each t,
C[k, t] = min_s ( C[k−1, s] + L(t, s) )
D[k, t] = argmin_s ( C[k−1, s] + L(t, s) )
where L(t, s) is the cost to transition from t to s (see Equation 17). For all s ≥ t, we set L(t, s) = ∞ (i.e., we only move backwards in the diffusion process). This procedure captures the shortest path cost in C and the shortest path itself in D. We further observe that, by running the DP algorithm for each k from 1 to T (instead of K), we can extract the optimal paths for all possible budgets K. Algorithm 1 illustrates a vectorized version of the procedure we have outlined in this section, while Algorithm 2 shows how to explicitly extract the optimal paths from D.
4.3 EFFICIENT MEMOIZATION
A priori, our dynamic programming approach appears to be inefficient because it requires computing O(T²) terms (recall that each L(t, s) term depends on a neural network forward pass). We however observe that a single forward pass of the DDPM can be used to compute all the L(t, ·) terms. This holds true even in the case where the pre-trained DDPM learns the variances. For example, in Nichol & Dhariwal (2021), instead of fixing them to g̃_ts = g_ts g_s0 / g_t0 as we outlined in the previous section, the forward pass itself still only depends on t and not s, and the variance of p_θ(x_s|x_t) is obtained by interpolating the forward pass's output logits v with exp(v log g²_ts + (1 − v) log g̃²_ts). Thus, computing the table of all the L(t, s) ELBO terms only requires O(T) forward passes.
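To make the learned-variance case concrete, the sketch below evaluates that interpolation for every candidate s from a single forward pass at timestep t; the variable names are illustrative and not taken from any released code.

    import numpy as np

    def interpolated_variances(v, g_ts_sq, g_tilde_ts_sq):
        # Variance of p(x_s | x_t) for every candidate s < t, given one forward pass
        # at timestep t. `v` is the model's interpolation output (nominally in [0, 1]),
        # which depends only on t; `g_ts_sq` and `g_tilde_ts_sq` are the per-s arrays
        # of g_ts^2 and g~_ts^2. Because v depends only on t, one forward pass covers
        # the entire row of L(t, .) terms.
        return np.exp(v * np.log(g_ts_sq) + (1.0 - v) * np.log(g_tilde_ts_sq))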
5 EXPERIMENTS
We apply our method on a wide variety of pre-trained DDPMs from prior work. This emphasizes the fact that our method is applicable to any pre-trained DDPM model. In particular, we rely on the CIFAR10 model checkpoints released by Nichol & Dhariwal (2021) for both their Lhybrid and Lvlb objectives. We also showcase results on CIFAR10 (Krizhevsky et al., 2009) with the exact configuration used by Ho et al. (2020), which we denote as Lsimple, as well as Lhybrid on ImageNet 64x64 (Deng et al., 2009) following Nichol & Dhariwal (2021), training these last two models from scratch for 800K and 3M steps, respectively, but otherwise using the exact same configurations as the authors.
In our experiments, we always search over a grid that includes all the timesteps used to train the model, i.e., {t/T : t ∈ {1, ..., T − 1}}. For our CIFAR10 results, we computed the memoization tables with Monte Carlo estimates over the full training dataset, while on ImageNet 64x64 we limited the Monte Carlo estimates to 16,384 images from the training dataset.
For each pre-trained model, we compare the negative log likelihoods (estimated using the full heldout dataset) of the strides discovered by our dynamic programming algorithm against even and quadratic strides, following Song et al. (2020). We find that our dynamic programming algorithm discovers strides resulting in much better log likelihoods than the hand-crafted strides used in prior work, particularly in the few-step regime. We provide a visualization of the log likelihood curves as a function of computation budget in Figure 1 for Lsimple CIFAR10 and Lhybrid ImageNet 64x64 (Deng et al., 2009), a full list of the scores in the few-step regime in Table 1, and a visualization of the discovered steps themselves in Figure 2.
5.1 COMPARISON WITH FID
We further evaluate our discovered strides by reporting FID scores (Heusel et al., 2017) on 50,000 model samples against the same number of samples from the training dataset, as is standard in the literature. We find that, although our strides yield much better log likelihoods, this optimization does not necessarily translate into better FID scores. Results are included in Figure 3. This weakened correlation between log-likelihoods and FID is consistent with observations in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021).
To remedy this issue, we show that, despite our focus in this work being likelihood, we can significantly improve the FID scores of the strides discovered by our method simply by optimizing a reweighted ELBO. Recall that, as discussed in Section 4, our proposed method is compatible with any decomposable objective. Moreover, prior work has shown that the choice of ELBO weights has a significant effect on sample quality (Ho et al., 2020; Durkan & Song, 2021), and that choosing weights corresponds to choosing an equally valid variational lower bound of the data for a DDIM (Song et al., 2020). Similarly to prior work in the VAE literature, we thus stumble upon an open problem where different variational lower bounds compatible with the model (even with the same bits/dim) can lead to samples with different qualitative characteristics (Alemi et al., 2018). As our focus is likelihood, we leave the research question of finding the weights / ELBO that agree best with FID for future work, but nevertheless identify one such choice that favourably biases our algorithm on this front. Details about our construction of weights and results are included in the appendix (A.3).
5.2 MONTE CARLO ABLATION
To investigate the feasibility of our approach using minimal computation, we experimented with setting the number of Monte Carlo datapoints used to compute the dynamic programming table of negative log likelihood terms to 128 samples (i.e., easily fitting into a single batch in GPU memory). We find that, for CIFAR10, the difference in log likelihoods is negligible, while on ImageNet 64x64 there is a visible yet slight improvement in negative log likelihood when filling the table with more samples. We hypothesize that this is due to the higher diversity of ImageNet. Nevertheless, we highlight that our procedure can be applied very quickly (i.e., with just T forward passes of a neural network when using a single batch, as opposed to a running average over batches), even for large models, to significantly improve their log likelihoods in the few-step regime.
6 RELATED WORK
DDPMs (Ho et al., 2020) have recently shown results that are competitive with GANs (Goodfellow et al., 2014), and they can be traced back to the work of Sohl-Dickstein et al. (2015) as a restricted family of deep latent variable models. Dhariwal & Nichol (2021) have more recently shown that DDPMs can outperform GANs in FID scores (Heusel et al., 2017). Song & Ermon (2019) have also linked DDPMs to denoising score matching (Vincent et al., 2008; 2010), which is crucial to the continuous-time formulation (Song et al., 2021; Kingma et al., 2021). More recent work on the few-step regime of DDPMs (Song et al., 2020; Chen et al., 2021; Nichol & Dhariwal, 2021; San-Roman et al., 2021; Kong & Ping, 2021; Jolicoeur-Martineau et al., 2021) has also guided our research efforts. DDPMs are also very closely related to variational autoencoders (Kingma & Welling,
2013), where more recent work has shown that, with many stochastic layers, they can also attain competitive negative log likelihoods in unconditional image generation (Child, 2020). Also very closely related to DDPMs, there has been work on non-autoregressive modeling of text sequences that can be regarded as discrete-space DDPMs with a forward process that masks or removes tokens (Lee et al., 2018; Gu et al., 2019; Stern et al., 2019; Chan et al., 2020; Saharia et al., 2020). The UNet architecture (Ronneberger et al., 2015) has been key to the recent success of DDPMs, and as shown by Ho et al. (2020); Nichol & Dhariwal (2021), augmenting UNet with self-attention (Shaw et al., 2018) at scales where attention is computationally feasible has helped bring DDPMs closer to the current state-of-the-art autoregressive generative models (Child et al., 2019; Jun et al., 2020; Roy et al., 2021).
7 CONCLUSION AND DISCUSSION
By regarding the selection of the inference schedule as an optimization problem, we present a novel and efficient approach to discover a likelihood-optimal inference schedule for a pre-trained DDPM with a simple dynamic programming algorithm. Our method need only be applied once to discover the schedule, and does not require re-training the DDPM. In the few-step regime, we discover schedules on Lsimple CIFAR10 and Lhybrid ImageNet 64x64 that require only 32 steps, yet sacrifice ≤ 0.1 bits per dimension compared to state-of-the-art DDPMs using hundreds-to-thousands of refinement steps. Our approach only needs forward passes of the DDPM neural network to fill the dynamic programming table of L(t, s) terms, and we show that we can fill the dynamic programming table with just O(T) forward passes. Moreover, we can estimate the table using only 128 Monte Carlo samples, finding this to be sufficient even for datasets such as ImageNet with high diversity. Our method achieves strong likelihoods with very few refinement steps, outperforming prior work utilizing hand-crafted strides (Ho et al., 2020; Nichol & Dhariwal, 2021).
Despite very strong log-likelihood results, we observe that maximizing the unweighted ELBO can actually lead to higher (worse) FID scores, and on ImageNet 64x64, a decrease in sample quality for the smallest budgets K ∈ {32, 64}; this is consistent with findings in prior work (Ho et al., 2020; Nichol & Dhariwal, 2021). Nevertheless, our method is compatible with any decomposable objective such as reweighted variational lower bounds, and we show that a simple choice of reweighted ELBO (or, equivalently, a choice of DDIM) can remedy this issue. Developing principled methods to choose variational lower bounds or other decomposable metrics that correlate best with image quality is thus an important direction for future research. Finally, we remark that likelihood optimization itself is useful for specific applications: beyond better compression, in domains such as non-autoregressive text generation, where likelihood correlates much better with sample quality and where diffusion models are starting to make progress (Austin et al., 2021), our method has the potential to improve sampling speed at far less cost in generation fidelity and without such adaptations.
REPRODUCIBILITY STATEMENT
We will fully open source our work and provide code pointers in the camera-ready version of the paper. Nevertheless, we provide pseudocode with a complete implementation of our proposed algorithm to maximize ease of reproducibility while we work on open-sourcing our code (see Algorithms 1 and 2). Since we experiment with open-sourced datasets and pre-trained models that already have publicly available checkpoints, our work is fully reproducible. We additionally emphasize that our method has no hyperparameters of its own.
ETHICS STATEMENT
Innovations in generative models have the potential to enable harmful and unethical applications. In applications where no harm is intended, bias and other failure modes of generative models and datasets used to train them can also lead to issues with fairness, discrimination, and other forms of harm. While our work is focused on making diffusion models more efficient, we believe its public release will not cause any form of immediate harm, as much more efficient generative models for images like GANs can still achieve high sample quality at a fraction of the speed of diffusion models.
A APPENDIX
A.1 PROOF FOR EQUATION 12
From Equation 10, we get by implicit differentiation that
f(t) = ψ(t, 0) = exp(∫_0^t f_sde(u) du)
⇒ f'(t) = exp(∫_0^t f_sde(u) du) · (d/dt) ∫_0^t f_sde(u) du = f(t) f_sde(t)
⇒ f_sde(t) = f'(t) / f(t).
Similarly as above, and also using the fact that ψ(t, s) = ψ(t, 0)/ψ(s, 0),
g(t)² = ∫_0^t ψ(t, u)² g_sde(u)² du = ∫_0^t (f(t)²/f(u)²) g_sde(u)² du = f(t)² ∫_0^t (g_sde(u)²/f(u)²) du
⇒ 2 g(t) g'(t) = 2 f(t) f'(t) g(t)²/f(t)² + f(t)² (d/dt) ∫_0^t (g_sde(u)²/f(u)²) du = 2 f_sde(t) g(t)² + g_sde(t)²
⇒ g_sde(t) = √(2 (g(t) g'(t) − f_sde(t) g(t)²)).
A.2 PROOF FOR EQUATIONS 13 AND 14
From Equation 10 and ψ(t, s) = ψ(t, 0)/ψ(s, 0) it is immediate that f_ts x_s is the mean of q(x_t|x_s). To show that g²_ts is the variance of q(x_t|x_s), Equation 10 implies that
Var[x_t|x_s] = ∫_s^t ψ(t, u)² g_sde(u)² du
= ∫_0^t ψ(t, u)² g_sde(u)² du − ∫_0^s ψ(t, u)² g_sde(u)² du
= g(t)² − ψ(t, 0)² ∫_0^s (ψ(s, u)² / (ψ(s, u)² ψ(u, 0)²)) g_sde(u)² du
= g(t)² − ψ(t, 0)² ∫_0^s (ψ(s, u)²/ψ(s, 0)²) g_sde(u)² du = g(t)² − ψ(t, s)² g(s)²
= g(t)² − f²_ts g(s)² = g²_ts.
The mean of q(x_s|x_t, x_0) is given by the Gaussian conjugate prior formula (where all the distributions are conditioned on x_0). Let µ = f_ts x_s, so we have a prior over µ given by
x_s|x_0 ∼ N(f_s0 x_0, g²_s0 I_d) ⇒ µ|x_0 ∼ N(f_s0 f_ts x_0, f²_ts g²_s0 I_d) ∼ N(f_t0 x_0, f²_ts g²_s0 I_d),
and a likelihood with mean µ:
x_t|x_s, x_0 ∼ x_t|x_s ∼ N(f_ts x_s, g²_ts I_d) ⇒ x_t|µ, x_0 ∼ x_t|µ ∼ N(µ, g²_ts I_d).
Then it follows by the formula that µ|x_t, x_0 has variance
Var[µ|x_t, x_0] = (1/(f²_ts g²_s0) + 1/g²_ts)^(−1) = ((g²_ts + f²_ts g²_s0)/(f²_ts g²_s0 g²_ts))^(−1) = f²_ts g²_s0 g²_ts / (g²_ts + f²_ts g²_s0)
⇒ Var[x_s|x_t, x_0] = (1/f²_ts) Var[µ|x_t, x_0] = g²_s0 g²_ts / (g²_ts + f²_ts g²_s0) = g²_s0 g²_ts / g²_t0 = g̃²_ts,
and mean
E[µ|x_t, x_0] = (1/(f²_ts g²_s0) + 1/g²_ts)^(−1) (f_t0 x_0/(f²_ts g²_s0) + x_t/g²_ts) = (f_t0 g²_ts x_0 + f²_ts g²_s0 x_t)/(g²_ts + f²_ts g²_s0) = (f_t0 g²_ts x_0 + f²_ts g²_s0 x_t)/g²_t0
⇒ E[x_s|x_t, x_0] = (1/f_ts) E[µ|x_t, x_0] = ((f_t0/f_ts) g²_ts x_0 + f_ts g²_s0 x_t)/g²_t0 = (f_s0 g²_ts x_0 + f_ts g²_s0 x_t)/g²_t0 = f̃_ts(x_t, x_0).
A.3 REWEIGHTED ELBO RESULTS
While the focus of our work is likelihood, we report FID scores for the sake of completeness, as well as to show the adaptability of our method via reweighting to focus on sample quality, as mentioned in Section 5.1.
To choose a reweighting scheme for each L(t, s) term that takes into account both t and s, Kingma et al. (2021) show that choosing discretized terms
L_w(t, s) = [ −∫_s^t w(u) SNR'(u) du ] ‖x_0 − x̂_0(x_t, t)‖²   (18)
ensures that, in the limit of infinitely many timesteps, the continuous ELBO becomes −(1/2) E_q ∫_0^1 w(t) SNR'(t) ‖x_0 − x̂_0(x_t, t)‖² dt (where SNR(t) = f²_t0/g²_t0, and a constant w(t) = 1 yields the unweighted ELBO).
While choosing w(t) = −SNR(t)/SNR'(t) (which is Lsimple in the limit) did not work, we find that choosing w(t) = −1/SNR'(t) (which leads to a continuous objective of unweighted mean squared errors and L_w(t, s) = (t − s) ‖x_0 − x̂_0(x_t, t)‖²) allows our algorithm to outperform DDPM FID scores and achieve similar scores to DDIM (Song et al., 2020). We call this “MSE reweighting” and include FID scores in Table 5, comparing to DDPM but also DDIM(η = 0), which does not admit likelihood computation but has been shown to be one of the strongest FID baselines in the few-step regime. The FID scores were estimated with 50,000 model and training data samples, as is standard in the literature.
A.4 NOTE ON SAMPLING STRATEGIES FOR FAIR COMPARISON
Finally, we discuss an alternative approach to the one used in the figures of the paper for producing more qualitatively comparable samples. Given a fixed budget K, Figures 4 and 5 are produced to generate “comparable” samples across different strides by fixing all the standard Gaussian vectors in the sampling chain. However, another approach that allows comparing samples across different budgets is to fix a single Brownian motion trajectory over all T steps, and to use discretizations (based on any stride) of this single random trajectory to generate the samples. We empirically find, however, that the former approach tends to produce much more similar images (see Figures 7 and 8 below). We suspect that the use of different strides (and hence different random directions from the fixed random trajectory), along with the very chaotic, non-linear behavior of DDPMs, is the cause of this behavior. | 1. How does the paper improve computational efficiency in DDPMs?
2. Is the suggested technique reasonable, and what are its limitations?
3. How does the paper demonstrate the effectiveness of the proposed method?
4. What are some weaknesses in the presentation, particularly regarding Figure 5?
5. How could the authors improve their approach to better understand the effect of increasing the number of steps?
6. What is the main contribution of this paper, and how does it differ from other dynamic programming algorithms?
7. Can the procedure suggested in this paper be applied to training expensive models, and if so, how?
8. Does the paper have any positive ethical impacts, such as reducing climate change by improving computational efficiency?
9. Are there any minor issues with the paper's formatting or content, such as inconsistent notation or lacking definitions? | Summary Of The Paper
Review | Summary Of The Paper
Samples are generated from DDPMs by solving an SDE (often in "discrete time", which is used to refer to specifically the Euler--Maruyama discretisation). This necessitates a choice for where to make numerical steps. Each choice of step locations has a corresponding ELBO. This paper demonstrates that (on a pretrained model) the optimal ELBO may be obtained via a dynamic programming algorithm for the location of the steps.
Review
The paper is very clearly written, and I enjoyed reading this paper.
The main attraction of this paper is the dramatic increase in computational efficiency -- the authors discuss one example in which 4000 steps in the SDE solver are replaced with merely 32 steps. [I will tend to refer to things as being steps of an SDE solver, even in the discrete case, since that's basically what's going on.] This is certainly a dramatic claim, as the high computational cost of DDPMs has so far been one of their major limiting factors.
The suggested technique seems to be mostly reasonable. Overall the dramatic reduction in steps feels "too good to be true" -- a sentiment that is largely borne out by Section 5.1, in which it is demonstrated that improving the ELBO does not necessarily imply improving the FID. As the authors note, it is possible to derive multiple valid ELBOs, so this is a case in which optimising the ELBO need not imply actually improving the model.
Overall, my take on this paper is that speed is improved, but it is hit-and-miss whether model performance is compromised whilst doing so. This is reflected in my middling-acceptance score. With some refinement I could see the techniques this paper proposes being of great utility.
Figure 5
One meaningful weakness in the presentation is Figure 5, in which I think different Brownian sample paths were used to generate each image. I do note that the text claims that the same random seed was used, but the variety -- both within each group-of-steps, and between each group-of-steps, means I am skeptical. My guess is that (a) different Brownian sample paths were used for each group of steps, and (b) within each group of steps, "using the same random seed" does not actually refer to using the same Brownian motion; rather it refers to using the same increments (each of which are rescaled by α, σ or g, depending on your notation). This is not at all the same thing as using the same Brownian motion.
The appropriate thing to do would be to use the same continuous-time Brownian motion sample for every single picture shown in Figure 5. Every time a point is queried (presumably nearly always at a point that it has not been queried at before, as different step schemes may place steps in very different places), then a Brownian bridge should be constructed between the two samples already observed either side of it.
The authors have not released code so I cannot see what library they are using themselves, but the above procedure may easily be done using the BrownianInterval of the torchsde library [1]. Make sure to use a single BrownianInterval object for the entirety of generating a figure (recreating a new one at any point would be a mistake, as it is deterministic only up to both its seed and the points it has already been queried at). (To give the appropriate references: the "Brownian Interval" was introduced in [2], as an improvement of the "Virtual Brownian Tree" of [3].)
If the above procedure is followed then I would expect the generated samples to much more closely resemble each in other, and in doing so be able to better understand the effect of increasing the number of steps. (Which is, after all, central to this paper.)
Other remarks
Equation (16) is clearly central to the paper. However, it pretty much comes out of nowhere. (At least for the reader who doesn't hold all the mathematics of DDPMs in their head.) I think that a derivation would be a meaningful improvement to the paper.
The dynamic programming algorithm outlined in Section 4.2 feels essentially standard -- besides Dijkstra's algorithm, it also seems very reminiscent of dynamic time warping. I regard the main contribution of this paper as the identification that step locations can be chosen via DP; not the algorithm itself.
The entire paper is framed only in the context of inference. I speculate that it might also be useful in the context of training: minimising training costs, especially for expensive models such as these, is a topic of great importance. Perhaps the procedure suggested in this paper could be re-run every N training steps, for some N?
Ethics statement
I would have thought that improving the computational efficiency of costly models would have some (perhaps small) positive impact on the pressing issue of climate change. It seems a bit perverse that this positive ethical impact is not discussed in the ethics statement.
Minor points
D_KL never has brackets around its arguments -- e.g. it's just D_KL p(x) q(x) rather than D_KL(p(x), q(x)) or D_KL(p(x) || q(x)).
Page 4: The abbreviation "i.e." is usually discouraged in academic writing.
Algorithms 1 and 2: These are a weird mix of pseudocode and Python. I think it would be preferred to pick just one. (Especially as they rely on behaviour specific to NumPy, such as indexing by None.)
I am not convinced how meaningful the discussion in Section 4.3 really is. It points out that O(T) forward passes are required. As each forward pass takes O(T) work then overall O(T²) work is required -- exactly as expected. What is new here?
I don't think "BPD" (page 7) is defined.
References
[1] Li. "torchsde" https://github.com/google-research/torchsde
[2] Kidger et al. "Efficient and Accurate Gradients for Neural SDEs" NeurIPS 2021 https://arxiv.org/abs/2105.13493
[3] Li et al. "Scalable Gradients for Stochastic Differential Equations" AISTATS 2020 https://arxiv.org/abs/2001.01328 |
ICLR | Title
Adversarial Vulnerability of Neural Networks Increases with Input Dimension
Abstract
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the ℓ1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network's weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
1 INTRODUCTION
Following the work of Goodfellow et al. (2015), Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the-art CNNs down to chance level with imperceptible changes of the inputs. A number of studies have tried to address this issue, but only a few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs. Of course, this view, which we will focus on here, assumes that the network and loss are differentiable. It has the advantage of yielding a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss. Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level.
Contributions. More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability. Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture. That leaves them increasingly vulnerable to adversarial noise. We corroborate our theoretical results by extensive experiments. Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis. We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability. This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.
2 FROM ADVERSARIAL EXAMPLES TO LARGE GRADIENTS
Suppose that a given classifier ϕ classifies an image x as being in category ϕ(x). An adversarial image is a small modification of x, barely noticeable to the human eye, that suffices to fool the classifier into predicting a class different from ϕ(x). It is a small perturbation of the inputs that creates a large variation of outputs. Adversarial examples thus seem inherently related to large gradients of the network, a connection that we will now clarify. Note that visible adversarial examples sometimes appear in the literature, but we deliberately focus on imperceptible ones.
Adversarial vulnerability and adversarial damage. In practice, an adversarial image is constructed by adding a perturbation δ to the original image x such that ‖δ‖ ≤ ε for some (small) number ε and a given norm ‖·‖ over the input space. We call the perturbed input x + δ an ε-sized ‖·‖-attack and say that the attack was successful when ϕ(x + δ) ≠ ϕ(x). This motivates
Definition 1. Given a distribution P over the input-space, we call adversarial vulnerability of a classifier ϕ to an ε-sized ‖·‖-attack the probability that there exists a perturbation δ of x such that
‖δ‖ ≤ ε and ϕ(x) ≠ ϕ(x + δ).   (1)
We call the average increase-after-attack E_{x∼P}[∆L] of a loss L the (L-)adversarial damage (of the classifier ϕ to an ε-sized ‖·‖-attack).
When L is the 0-1-loss L0/1, adversarial damage is the accuracy-drop after attack. The 0-1-loss damage is always smaller than adversarial vulnerability, because vulnerability counts all class-changes of ϕ(x), whereas some of them may be neutral to adversarial damage (e.g. a change between two wrong classes). The L0/1-adversarial damage thus lower bounds adversarial vulnerability. Both are even equal when the classifier is perfect (before attack), because then every change of label introduces an error. It is hence tempting to evaluate adversarial vulnerability with L0/1-adversarial damage.
From ∆L_{0/1} to ∆L and to ∂_xL. In practice however, we do not train our classifiers with the non-differentiable 0-1-loss but use a smoother loss L, such as the cross-entropy loss. For similar reasons, we will now investigate the adversarial damage E_x[∆L(x, c)] with loss L rather than L_{0/1}. Like for Goodfellow et al. (2015); Lyu et al. (2015); Sinha et al. (2018) and many others, a classifier ϕ will hence be robust if, on average over x, a small adversarial perturbation δ of x creates only a small variation δL of the loss. Now, if ‖δ‖ ≤ ε, then a first order Taylor expansion in ε shows that
δL = max_{δ : ‖δ‖ ≤ ε} |L(x + δ, c) − L(x, c)| ≈ max_{δ : ‖δ‖ ≤ ε} |∂_xL · δ| = ε |||∂_xL|||,   (2)
where ∂_xL denotes the gradient of L with respect to x, and where the last equality stems from the definition of the dual norm |||·||| of ‖·‖. Now two remarks. First: the dual norm only kicks in because we let the input noise δ optimally adjust to the coordinates of ∂_xL within its ε-constraint. This is the brand mark of adversarial noise: the different coordinates add up, instead of statistically canceling each other out as they would with random noise. For example, if we impose that ‖δ‖_2 ≤ ε, then δ will strictly align with ∂_xL. If instead ‖δ‖_∞ ≤ ε, then δ will align with the sign of the coordinates of ∂_xL. Second remark: while the Taylor expansion in (2) becomes exact for infinitesimal perturbations, for finite ones it may actually be dominated by higher-order terms. Our experiments (Figures 1 & 2) however strongly suggest that in practice the first order term dominates the others. Now, remembering that the dual norm of an ℓ_p-norm is the corresponding ℓ_q-norm, and summarizing, we have proven
Lemma 2. At first order approximation in ε, an ε-sized adversarial attack generated with norm ‖·‖ increases the loss L at point x by ε |||∂_xL|||, where |||·||| is the dual norm of ‖·‖. In particular, an ε-sized ℓ_p-attack increases the loss by ε ‖∂_xL‖_q where 1 ≤ p ≤ ∞ and 1/p + 1/q = 1.
Consequently, the adversarial damage of a classifier with loss L to ε-sized attacks generated with norm ‖·‖ is ε Ex|||∂xL|||. This is valid only at first order, but it proves that at least this kind of first-order vulnerability is present. We will see that the first-order predictions closely match the experiments, and that this insight helps protecting even against iterative (non-first-order) attack methods (Figure 1).
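As an illustration, this first-order estimate of Lemma 2 can be computed directly with automatic differentiation. The sketch below assumes a PyTorch classifier (the framework is our assumption, not prescribed by the paper); `model`, `x`, `y` and `eps` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def dual_exponent(p):
    # dual norm exponent q with 1/p + 1/q = 1
    if p == float("inf"):
        return 1.0
    if p == 1:
        return float("inf")
    return p / (p - 1.0)

def first_order_damage(model, x, y, eps, p=float("inf")):
    """Predicted loss increase of an eps-sized l_p attack: eps * ||grad_x L||_q."""
    q = dual_exponent(p)
    x = x.clone().requires_grad_(True)
    # reduction="sum" so each example's gradient is the gradient of its own loss
    loss = F.cross_entropy(model(x), y, reduction="sum")
    grad, = torch.autograd.grad(loss, x)
    return eps * grad.flatten(1).norm(p=q, dim=1)   # dual norm, one value per example
```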
Calibrating the threshold ε to the attack-norm ‖·‖. Lemma 2 shows that adversarial vulnerability depends on three main factors: (i) ‖·‖, the norm chosen for the attack, (ii) ε, the size of the attack, and (iii) Ex|||∂xL|||, the expected dual norm of ∂xL. We could see Point (i) as a measure of our sensibility to image perturbations, (ii) as our sensibility threshold, and (iii) as the classifier’s expected marginal sensibility to a unit perturbation. ε Ex|||∂xL||| hence intuitively captures the discrepancy between our perception (as modeled by ‖·‖) and the classifier’s perception for an input-perturbation of small size ε. Of course, this viewpoint supposes that we actually found a norm ‖·‖ (or more generally a metric) that faithfully reflects human perception – a project in its own right, far beyond the scope of this paper. However, it is clear that the threshold ε that we choose should depend on the norm ‖·‖ and hence on the input-dimension d. In particular, for a given pixel-wise order of magnitude of the perturbations δ, the `p-norm of the perturbation will scale like d^{1/p}. This suggests to write the threshold ε_p used with `p-attacks as:
ε_p = ε_∞ d^{1/p} , (3)
where ε_∞ denotes a dimension-independent constant. In Appendix D we show that this scaling also preserves the average signal-to-noise ratio ‖x‖2 / ‖δ‖2, both across norms and dimensions, so that ε_p could correspond to a constant human perception-threshold. With this in mind, the impatient reader may already jump to Section 3, which contains our main contributions: the estimation of Ex‖∂xL‖q for standard feed-forward nets. Meanwhile, the rest of this section shortly discusses two straightforward defenses that we will use later and that further illustrate the role of gradients.
A new old regularizer. Lemma 2 shows that the loss of the network after an ε/2-sized ‖·‖-attack is
L_{ε,|||·|||}(x, c) := L(x, c) + (ε/2) |||∂xL||| . (4)
It is thus natural to take this loss-after-attack as a new training objective. Here we introduced a factor 1/2 for reasons that will become clear in a moment. Incidentally, for ‖·‖ = ‖·‖2, this new loss reduces to an old regularization-scheme proposed by Drucker & LeCun (1991) called double-backpropagation. At the time, the authors argued that slightly decreasing a function’s or a classifier’s sensitivity to input perturbations should improve generalization. In a sense, this is exactly our motivation when defending against adversarial examples. It is thus not surprising to end up with the same regularization term. Note that our reasoning only shows that training with one specific norm |||·||| in (4) helps to protect against adversarial examples generated from ‖·‖. A priori, we do not know what will happen for attacks generated with other norms; but our experiments suggest that training with one norm also protects against other attacks (see Figure 2 and Section 4.1).
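A sketch of this loss-after-attack objective (4), again under the assumption of a PyTorch set-up; for q = 2 it is the double-backpropagation penalty, and `create_graph=True` is what lets the gradient-norm term itself be differentiated during training.

```python
import torch
import torch.nn.functional as F

def loss_after_attack(model, x, y, eps, q=2):
    """Cross-entropy plus (eps/2) times the l_q norm of the input gradient, as in (4)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # keep the graph so the penalty can be backpropagated a second time (double backprop)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad.flatten(1).norm(p=q, dim=1).mean()
    return loss + 0.5 * eps * penalty
```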
Link to adversarially-augmented training. In (1), ε designates an attack-size threshold, while in (4), it is a regularization-strength. Rather than a notation conflict, this reflects an intrinsic duality between two complementary interpretations of ε, which we now investigate further. Suppose that, instead of using the loss-after-attack, we augment our training set with ε-sized ‖·‖-attacks x + δ, where for each training point x, the perturbation δ is generated on the fly to locally maximize the loss-increase. Then we are effectively training with
L̃_{ε,‖·‖}(x, c) := (1/2) (L(x, c) + L(x+δ, c)) , (5)
where by construction δ satisfies (2). We will refer to this technique as adversarially augmented training. It was first introduced by Goodfellow et al. (2015) with ‖·‖ = ‖·‖∞ under the name of FGSM1-augmented training. Using the first order Taylor expansion in ε of (2), this ‘old-plus-post-attack’ loss of (5) simply reduces to our loss-after-attack, which proves
Proposition 3. Up to first-order approximations in ε, L̃_{ε,‖·‖} = L_{ε,|||·|||} . Said differently, for small enough ε, adversarially-augmented training with ε-sized ‖·‖-attacks amounts to penalizing the dual norm |||·||| of ∂xL with weight ε/2. In particular, double-backpropagation corresponds to training with `2-attacks, while FGSM-augmented training corresponds to an `1-penalty on ∂xL.
This correspondence between training with perturbations and using a regularizer can be compared to Tikhonov regularization: Tikhonov regularization amounts to training with random noise (Bishop, 1995), while training with adversarial noise amounts to penalizing ∂xL. Section 4.1 verifies the correspondence between adversarial augmentation and gradient regularization empirically, which also strongly suggests the empirical validity of the first-order Taylor expansion in (2).
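The adversarially-augmented objective (5) can be sketched as follows for the FGSM (`∞) case, still assuming PyTorch with hypothetical placeholders; by Proposition 3 it matches, to first order, an `1 gradient penalty of weight ε/2.

```python
import torch
import torch.nn.functional as F

def fgsm_augmented_loss(model, x, y, eps):
    """Half the clean loss plus half the loss on an eps-sized l_inf (FGSM) attack, as in (5)."""
    x_req = x.clone().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x_req), y)
    # retain the graph so clean_loss can still be backpropagated through the model weights
    grad, = torch.autograd.grad(clean_loss, x_req, retain_graph=True)
    x_adv = (x + eps * grad.sign()).detach()        # the attack is treated as a fixed input
    adv_loss = F.cross_entropy(model(x_adv), y)
    return 0.5 * (clean_loss + adv_loss)
```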
3 ESTIMATING ‖∂xL‖q TO EVALUATE ADVERSARIAL VULNERABILITY
In this section, we evaluate the size of ‖∂xL‖q for standard neural network architectures. We start with fully-connected networks, and finish with a much more general theorem that not only encompasses CNNs (with or without strided convolutions), but also shows that the gradient-norms are essentially independent of the network topology. We start our analysis by showing how changing q affects the size of ‖∂xL‖q. Suppose for a moment that the coordinates of ∂xL have typical magnitude |∂xL|. Then ‖∂xL‖q scales like d^{1/q} |∂xL|. Consequently
ε_p ‖∂xL‖q ∝ ε_p d^{1/q} |∂xL| ∝ ε_∞ d |∂xL| . (6)
1FGSM = Fast Gradient Sign Method
This equation carries two important messages. First, we see how ‖∂xL‖q depends on d and q. The dependence seems highest for q = 1. But once we account for the varying perceptibility threshold ε_p ∝ d^{1/p}, we see that adversarial vulnerability scales like d · |∂xL|, whatever `p-norm we use. Second, (6) shows that to be robust against any type of `p-attack at any input-dimension d, the average absolute value of the coefficients of ∂xL must grow slower than 1/d. Now, here is the catch, which brings us to our core insight.
3.1 CORE IDEA: ONE NEURON WITH MANY INPUTS
In order to preserve the activation variance of the neurons from layer to layer, the neural weights are usually initialized with a variance that is inversely proportional to the number of inputs per neuron. Imagine for a moment that the network consisted only of one output neuron o linearly connected to all input pixels. For the purpose of this example, we assimilate o and L. Because we initialize the weights with a variance of 1/d, their average absolute value |∂xo| ≡ |∂xL| grows like 1/ √ d, rather than the required 1/d. By (6), the adversarial vulnerability ‖∂xo‖q ≡ ‖∂xL‖q therefore increases like d/ √ d = √ d.
This toy example shows that the standard initialization scheme, which preserves the variance from layer to layer, causes the average coordinate-size |∂xL| to grow like 1/√d instead of 1/d. When an `∞-attack tweaks its ε-sized input-perturbations to align with the coordinate-signs of ∂xL, all coordinates of ∂xL add up in absolute value, resulting in an output-perturbation that scales like √d and leaves the network increasingly vulnerable with growing input-dimension.
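The toy example can be checked numerically. The sketch below is an added illustration (not the paper's code): it draws the weights of a single linear output neuron with variance 1/d and shows that the `1 norm of its input gradient indeed grows like √d.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in [32 * 32, 64 * 64, 128 * 128, 256 * 256]:
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)   # weight variance 1/d
    # For o(x) = w . x, the input gradient is w itself, so ||grad_x o||_1 = ||w||_1.
    print(d, np.abs(w).sum(), np.sqrt(2 / np.pi) * np.sqrt(d))  # empirical vs sqrt(d) prediction
```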
3.2 GENERALIZATION TO DEEP NETWORKS
Our next theorems generalize the previous toy example to a very wide class of feedforward nets with ReLU activation functions. For illustration purposes, we start with fully connected nets and only then proceed to the broader class, which includes any succession of (possibly strided) convolutional layers. In essence, the proofs iterate our insight on one layer over a sequence of layers. They all rely on the following set (H) of hypotheses:
H1 Non-input neurons are followed by a ReLU killing half of its inputs, independently of the weights.
H2 Neurons are partitioned into layers, meaning groups that each path traverses at most once.
H3 All weights have 0 expectation and variance 2/(in-degree) (‘He-initialization’).
H4 The weights from different layers are independent.
H5 Two distinct weights w, w′ from a same node satisfy E [ww′] = 0.
If we follow common practice and initialize our nets as proposed by He et al. (2015), then H3-H5 are satisfied at initialization by design, while H1 is usually a very good approximation (Balduzzi et al., 2017). Note that such i.i.d. weight assumptions have been widely used to analyze neural nets and are at the heart of very influential and successful prior work (e.g., equivalence between neural nets and Gaussian processes as pioneered by Neal 1996). Nevertheless, they do not hold after training. That is why all our statements in this section are to be understood as orders of magnitudes that are very well satisfied at initialization in theory and in practice, and that we will confirm experimentally after training in Section 4. Said differently, while our theorems rely on the statistics of neural nets at initialization, our experiments confirm their conclusions after training.
Theorem 4 (Vulnerability of Fully Connected Nets). Consider a succession of fully connected layers with ReLU activations which takes inputs x of dimension d, satisfies assumptions (H), and outputs logits fk(x) that get fed to a final cross-entropy-loss layer L. Then the coordinates of ∂xfk grow like 1/ √ d, and
‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d . (7)
These networks are thus increasingly vulnerable to `p-attacks with growing input-dimension.
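Theorem 4 is easy to probe empirically at initialization. The following sketch is our own illustration with hypothetical layer sizes: it builds He-initialized fully connected ReLU nets on random inputs of growing dimension d and prints Ex‖∂xL‖1, which should grow roughly like √d.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_grad_l1(d, width=256, depth=4, n_classes=10, n_samples=64):
    layers, in_dim = [], d
    for _ in range(depth):
        lin = nn.Linear(in_dim, width)
        nn.init.kaiming_normal_(lin.weight)          # variance 2/(in-degree), hypothesis H3
        nn.init.zeros_(lin.bias)
        layers += [lin, nn.ReLU()]
        in_dim = width
    head = nn.Linear(in_dim, n_classes)
    nn.init.kaiming_normal_(head.weight)
    nn.init.zeros_(head.bias)
    net = nn.Sequential(*layers, head)

    x = torch.randn(n_samples, d, requires_grad=True)
    y = torch.randint(0, n_classes, (n_samples,))
    loss = F.cross_entropy(net(x), y, reduction="sum")
    grad, = torch.autograd.grad(loss, x)
    return grad.abs().sum(dim=1).mean().item()

for d in [256, 1024, 4096, 16384]:
    print(d, mean_grad_l1(d))   # roughly doubles each time d is multiplied by 4
```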
Theorem 4 is a special case of the next theorem, which will show that the previous conclusions are essentially independent of the network-topology. We will use the following symmetry assumption on the neural connections. For a given path p, let the path-degree dp be the multiset of encountered in-degrees along path p. For a fully connected network, this is the unordered sequence of layer-sizes
preceding the last path-node, including the input-layer. Now consider the multiset {dp}_{p∈P(x,o)} of all path-degrees when p varies among all paths from input x to output o. The symmetry assumption (relative to o) is
(S) All input nodes x have the same multiset {dp}p∈P(x,o) of path-degrees from x to o. Intuitively, this means that the statistics of degrees encountered along paths to the output are the same for all input nodes. This symmetry assumption is exactly satisfied by fully connected nets, almost satisfied by CNNs (up to boundary effects, which can be alleviated via periodic or mirror padding) and exactly satisfied by strided layers, if the layer-size is a multiple of the stride.
Theorem 5 (Vulnerability of Feedforward Nets). Consider any feed-forward network with linear connections and ReLU activation functions. Assume the net satisfies assumptions (H) and outputs logits fk(x) that get fed to the cross-entropy-loss L. Then ‖∂xfk‖2 is independent of the input dimension d and ε_2 ‖∂xL‖2 ∝ √d. Moreover, if the net satisfies the symmetry assumption (S), then
|∂xfk| ∝ 1/√d and (7) still holds: ‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d.
Theorems 4 and 5 are proven in Appendix B. The main proof idea is that in the gradient norm computation, the He-initialization exactly compensates the combinatorics of the number of paths in the network, so that this norm becomes independent of the network topology. In particular, we get
Corollary 6 (Vulnerability of CNNs). In any succession of convolution and dense layers, strided or not, with ReLU activations, that satisfies assumptions (H) and outputs logits that get fed to the cross-entropy-loss L, the gradients of the logit-coordinates scale like 1/√d and (7) is satisfied. Such networks are hence increasingly vulnerable with growing input-resolution to attacks generated with any `p-norm.
Appendix A shows that the network gradients are dampened when replacing strided layers by average poolings, essentially because average-pooling weights do not follow the He-init assumption H3.
4 EMPIRICAL RESULTS
In Section 4.1, we empirically verify the validity of the first-order Taylor approximation made in (2) (Fig.1), for example by checking the correspondence between loss-gradient regularization and adversarially-augmented training (Fig.2). Section 4.2 then empirically verifies that both the average `1-norm of ∂xL and the adversarial vulnerability grow like √d as predicted by Corollary 6. For all experiments, we approximate adversarial vulnerability using various attacks of the Foolbox package (Rauber et al., 2017). We use an `∞ attack-threshold of size ε_∞ = 0.005 (and later 0.002) which, for pixel-values ranging from 0 to 1, is completely imperceptible but suffices to fool the classifiers on a significant proportion of examples. This ε_∞-threshold should not be confused with the regularization-strengths ε appearing in (4) and (5), which will be varied in some experiments.
4.1 FIRST-ORDER APPROXIMATION, GRADIENT PENALTY, ADVERSARIAL AUGMENTATION
We train several CNNs with the same architecture to classify CIFAR-10 images (Krizhevsky, 2009). For each net, we use a specific training method with a specific regularization value ε. The training methods used were `1- and `2-penalization of ∂xL (Eq. 4), adversarial augmentation with `∞- and `2-attacks (Eq. 5), projected gradient descent (PGD) with randomized starts (7 steps per attack with step-size = 0.2 ε_∞; see Madry et al. 2018) and the cross-Lipschitz regularizer (Eq. 17 in Appendix C). We then test the adversarial vulnerability of each trained network using the following attack-methods: single-step `∞- (FGSM) and `2-attacks, iterative `∞- (PGD) and `2-attacks, and DeepFool attacks (Moosavi-Dezfooli et al., 2016). All networks have 6 ‘strided convolution → batchnorm → ReLU’ layers with strides [1, 2, 2, 2, 2, 2] respectively and 64 output-channels each, followed by a final fully-connected linear layer. Results are summarized in Figures 1 and 2. Figure 1 fixes the training method – gradient `1-regularization – and plots the obtained adversarial vulnerabilities for various attack types. Figure 2 fixes the attack type – iterative `∞-attacks – but plots the curves obtained for various training methods. Note that our goal here is not to advocate one defense over another, but rather to check the validity of the Taylor expansion, and empirically verify that first order terms (i.e., gradients) suffice to explain much of the observed adversarial vulnerability. Similarly, our goal in testing several attacks (Figure 1) is not to present a specifically strong one, but rather to verify that for all attacks, the trends are the same: the vulnerability grows with increasing gradients.
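For reference, the iterative `∞ (PGD) attack used above can be sketched as follows, assuming a PyTorch model and inputs in [0, 1]; the 7 steps, the 0.2 ε_∞ step-size and the randomized start follow the description in the text.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps, steps=7, step_frac=0.2):
    """Iterative l_inf attack with a randomized start inside the eps-ball."""
    delta = (torch.rand_like(x) * 2 - 1) * eps
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_frac * eps * grad.sign()).clamp(-eps, eps).detach()
    return (x + delta).clamp(0.0, 1.0)   # keep pixel values in [0, 1]
```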
Validity of first order expansion. The following observations support the validity of the first order Taylor expansion in (2) and suggest that it is a crucial component of adversarial vulnerability: (i) the efficiency of the first-order defense against iterative (non-first-order) attacks (Fig.1a); (ii) the striking similarity between the PGD curves (adversarial augmentation with iterative attacks) and the other adversarial training curves (one-step attacks/defenses); (iii) the functional-like dependence between any approximation of adversarial vulnerability and Ex‖∂xL‖1 (Fig.1b), and its independence of the training method (Fig.2d); (iv) the excellent correspondence between the gradient-regularization and adversarial training curves (see next paragraph). Said differently, adversarial examples seem indeed to be primarily caused by large gradients of the classifier as captured via the induced loss. 2
2On Figure 1, the two `∞-attacks seem more efficient than the others, because we chose an `∞ perturbation threshold (ε_∞). With an `2-threshold it is the opposite (see Figure 7, Appendix F).
Illustration of Proposition 3. The upper row of Figure 2 plots Ex‖∂xL‖1, adversarial vulnerability and accuracy as a function of ε d^{1/p}. The excellent match between the adversarial augmentation curve with p = ∞ (resp. p = 2) and its gradient-regularization dual counterpart with q = 1 (resp. q = 2) illustrates the duality between ε as a threshold for adversarially-augmented training and ε as a regularization constant in the regularized loss (Proposition 3). It also supports the validity of the first-order Taylor expansion in (2).
Confirmation of (3). Still on the upper row, the curves for p = ∞, q = 1 have no reason to match those for p = q = 2 when plotted against ε, because an ε-threshold is relative to a specific attack-norm. However, (3) suggested that the rescaled thresholds ε d^{1/p} may approximately correspond to a same ‘threshold-unit’ across `p-norms and across dimension. This is well confirmed by the upper row plots: by rescaling the x-axis, the p = q = 2 and q = 1, p = ∞ curves get almost super-imposed. Accuracy-vs-Vulnerability Trade-Off. Merging Figures 2b and 2c by taking out ε, Figure 2f shows that all gradient regularization and adversarial training methods yield equivalent accuracy-vulnerability trade-offs. Incidentally, for higher penalization values, these trade-offs appear to be much better than those given by cross-Lipschitz regularization.
The penalty-norm does not matter. We were surprised to see that on Figures 2d and 2f, the L_{ε,q} curves are almost identical for q = 1 and 2. This indicates that both norms can be used interchangeably in (4) (modulo proper rescaling of ε via (3)), and suggests that protecting against a specific attack-norm also protects against others. (6) may provide an explanation: if the coordinates of ∂xL behave like centered, uncorrelated variables with equal variance – which follows from assumptions (H) –, then the `1- and `2-norms of ∂xL are simply proportional. Plotting Ex‖∂xL(x)‖2 against Ex‖∂xL(x)‖1 in Figure 2e confirms this explanation. The slope is independent of the training method. Therefore, penalizing ‖∂xL(x)‖1 during training will not only decrease Ex‖∂xL‖1 (as shown in Figure 2a), but also drive down Ex‖∂xL‖2 and vice-versa.
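This proportionality is easy to verify in isolation. The sketch below is an added illustration with arbitrary values: for vectors with i.i.d. centered Gaussian coordinates, the ratio ‖g‖1/‖g‖2 stays close to √(2d/π), independently of the coordinate scale.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3 * 32 * 32
for scale in [1e-3, 1e-2, 1e-1]:
    g = rng.normal(0.0, scale, size=(1000, d))          # stand-ins for input gradients
    ratio = np.abs(g).sum(axis=1) / np.linalg.norm(g, axis=1)
    print(scale, ratio.mean(), np.sqrt(2 * d / np.pi))  # the slope is the same for every scale
```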
4.2 VULNERABILITY GROWS WITH INPUT RESOLUTION
Theorems 4-5 and Corollary 6 predict a linear growth of the average `1-norm of ∂xL with the square root of the input dimension d, and therefore also of adversarial vulnerability (Lemma 2). To test these predictions, we upsampled the CIFAR-10 images (of size 3 x 32 x 32) by copying pixels so as to get 4 datasets with, respectively, 32, 64, 128 and 256 pixels per edge. We then trained a CNN on each dataset
and computed their adversarial vulnerability (with iterative `∞-attacks, threshold ε_∞ = 0.002) and average ‖∂xL‖1 over the last 20 epochs on the same held-out test-dataset. This gave us 2 x 20 values per net and image-size, summarized in Figure 3. The dashed lines follow their medians and the errorbars show their 10th and 90th quantiles. As predicted by our theorems, both ‖∂xL‖1 and adversarial vulnerability grow approximately linearly with √d. We also ran a similar experiment on downsized ImageNet images, where we train several identical nets per image-size rather than just one. Conclusions are unchanged. See Appendix E.
All networks had exactly the same amount of parameters and very similar structure across the various input-resolutions. The CNNs were a succession of 8 ‘convolution → batchnorm → ReLU’ layers with 64 output channels, followed by a final full-connection to the 12 logit-outputs. We used 2×2 max-poolings after the convolutions of layers 2, 4, 6 and 8, and a final max-pooling after layer 8 that fed only 1 neuron per channel to the fully-connected layer. To ensure that the convolution-kernels cover similar ranges of the images across each of the 32, 64, 128 and 256 input-resolutions, we respectively dilated all convolutions (‘à trous’) by a factor 1, 2, 4 and 8.
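A sketch of the two ingredients of this set-up, assuming PyTorch (the helper names are ours): pixel-replication upsampling of the 32×32 images, and dilated 3×3 convolutions so that kernels cover the same image area at every resolution.

```python
import torch
import torch.nn as nn

def upsample_by_copy(x, factor):
    """(N, 3, 32, 32) -> (N, 3, 32*factor, 32*factor) by repeating each pixel factor^2 times."""
    return x.repeat_interleave(factor, dim=2).repeat_interleave(factor, dim=3)

def conv_block(in_ch, out_ch, dilation):
    """'Convolution -> batchnorm -> ReLU' block, dilated ('a trous') to match the resolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )
```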
5 DISCUSSIONS
5.1 IMPLICATIONS: WHY PRIOR VULNERABILITY MAY MATTER
Our theoretical results show that the priors of classical neural networks yield vulnerable functions because of naturally high gradients. And our experiments (Fig 3&6) suggest that usual training does not escape these prior properties. But how may these insights help us understand the vulnerability of robustly trained networks? Clearly, to be successful, robust training algorithms must escape ill-behaved priors, which explains why most methods (e.g. FGSM, PGD) are essentially gradient penalization techniques. But, MNIST aside, even state-of-the-art methods largely fail at protecting current network architectures (Madry et al., 2018), and understanding why is a motivation for this and many other papers. Interestingly, Schmidt et al. (2018) recently noticed that those methods actually do protect the nets on training examples, but fail to generalize to the test set. They hence conclude that state-of-the-art robustification algorithms work, but need more data. Alternatively however, when generalization fails, one can also reduce the model’s complexity. Large fully connected nets for example typically fail to generalize to out-of-sample examples: reaching accuracies similar to those of CNNs would need prohibitively many training points. Similarly, Schmidt et al.’s observations may suggest that, outside the training points, networks tend to recover their prior properties, i.e. naturally large gradients. Figure 4 corroborates this hypothesis. It plots the evolution over training epochs of the `1-gradient-norms of the CNNs from Section 4.2 (Fig 3) on the training and test sets respectively. The discrepancy is unmistakable: after a brief initialization phase, the norms decrease on the training set, but increase on the test set. They are moreover almost input-dimension independent on the training set, but scale as √d on the test set (as seen in Fig 3), up to respectively 2, 4, 8 and 16 times the training set values. These observations suggest that, with the current amount of data, tackling adversarial vulnerability may require new architectures with inherently smaller gradients. Searching these architectures among those with well-behaved prior-gradients seems a reasonable start, where our theoretical results may prove very useful.3
5.2 RELATED LITERATURE
On network vulnerability. Goodfellow et al. (2015) already stressed that adversarial vulnerability increases with growing dimension d. But their argument only relied on a linear ‘one-output-to-many-inputs’ model with dimension-independent weights. They therefore concluded on a linear growth of adversarial vulnerability with d. In contrast, our theory applies to almost any standard feed-forward architecture (not just linear), and shows that, once we adjust for the weight’s dimension-dependence, adversarial vulnerability increases like √d (not d), almost independently of the architecture. Nevertheless, our experiments confirm Goodfellow et al.’s idea that our networks are “too linear-like”, in the sense that a first-order Taylor expansion is indeed sufficient to explain the adversarial vulnerability of neural networks. As suggested by the one-output-to-many-inputs model, the culprit is that growing
3Appendix A investigates such a preliminary direction by introducing average poolings, which have a weight-size 1/(in-channels) rather than the typical 1/√(in-channels) of the other He-initialized weights.
dimensionality gives the adversary more and more room to ‘wriggle around’ with the noise and adjust to the gradient of the output neuron. This wriggling, we show, is still possible when the output is connected to all inputs only indirectly, even when no neuron is directly connected to all inputs, like in CNNs. This explanation of adversarial vulnerability is independent of the intrinsic dimensionality or geometry of the data (compare to Amsaleg et al. 2017; Gilmer et al. 2018). Finally, let us mention that Fawzi et al. (2016) show a close link between the vulnerability to small worst-case perturbations (as studied here) and larger average perturbations. Our findings on the adversarial vulnerability of NNs to small perturbations could thus be translated accordingly.
On robustification algorithms. Incidentally, Goodfellow et al. (2015) also already relate adversarial vulnerability to large gradients of the loss L, an insight at the very heart of their FGSM-algorithm. They however do not propose any explicit penalizer on the gradient of L other than indirectly through adversarially-augmented training. Conversely, Ross & Doshi-Velez (2018) propose the old double-backpropagation to robustify networks but make no connection to FGSM and adversarial augmentation. Lyu et al. (2015) discuss and use the connection between gradient-penalties and adversarial augmentation, but never actually compare both in experiments. This comparison however is essential to test the validity of the first-order Taylor expansion in (2), as confirmed by the similarity between the gradient-regularization and adversarial-augmentation curves in Figure 2. Hein & Andriushchenko (2017) derived yet another gradient-based penalty – the cross-Lipschitz penalty – by considering (and proving) formal guarantees on adversarial vulnerability itself, rather than adversarial damage. While both penalties are similar in spirit, focusing on the adversarial damage rather than vulnerability has two main advantages. First, it achieves better accuracy-to-vulnerability ratios, both in theory and practice, because it ignores class-switches between misclassified examples and penalizes only those that reduce the accuracy. Second, it allows one to deal with one number only, ∆L, whereas Hein & Andriushchenko’s cross-Lipschitz regularizer and theoretical guarantees explicitly involve all K logit-functions (and their gradients). See Appendix C. Penalizing network-gradients is also at the heart of contractive auto-encoders as proposed by Rifai et al. (2011), where it is used to regularize the encoder-features. Seeing adversarial training as a generalization method, let us also mention Hochreiter & Schmidhuber (1995), who propose to enhance generalization by searching for parameters in a “flat minimum region” of the loss. This leads to a penalty involving the gradient of the loss, but taken with respect to the weights, rather than the inputs. In the same vein, a gradient-regularization of the loss of generative models also appears in Proposition 6 of Ollivier (2014), where it stems from a code-length bound on the data (minimum description length). More generally, the gradient-regularized objective (4) is essentially the first-order approximation of the robust training objective max_{‖δ‖≤ε} L(x+δ, c), which has a long history in math (Wald, 1945), machine learning (Xu et al., 2009) and now adversarial vulnerability (Sinha et al., 2018). Finally, Cisse et al. (2017) propose new network-architectures that have small gradients by design, rather than by special training: an approach that makes all the more sense, considering the conclusion of Theorems 4 and 5. For further details and references on adversarial attacks and defenses, we refer to Yuan et al. (2017).
6 CONCLUSION
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂xL of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (Figures 1&2d). We then evaluated the size of ‖∂xL‖q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to `p-attacks with growing input dimension d (the image-size), almost independently of their architecture. Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training set, but not on the test set. Schmidt et al. (2018) suggest that alleviating this generalization gap requires more data. But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors. Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.
A EFFECTS OF STRIDED AND AVERAGE-POOLING LAYERS ON ADVERSARIAL VULNERABILITY
It is common practice in CNNs to use average-pooling layers or strided convolutions to progressively decrease the number of pixels per channel. Corollary 6 shows that using strided convolutions does not protect against adversarial examples. However, what if we replace strided convolutions by convolutions with stride 1 plus an average-pooling layer? Theorem 5 considers only randomly initialized weights with typical size 1/ √ in-degree. Average-poolings however introduce deterministic weights of size 1/(in-degree). These are smaller and may therefore dampen the input-to-output gradients and protect against adversarial examples. We confirm this in our next theorem, which uses a slightly modified version (H′) of (H) to allow average pooling layers. (H′) is (H), but where the He-init H3 applies to all weights except the (deterministic) average pooling weights, and where H1 places a ReLU on every non-input and non-average-pooling neuron.
Theorem 7 (Effect of Average-Poolings). Consider a succession of convolution layers, dense layers and n average-pooling layers, in any order, that satisfies (H′) and outputs logits fk(x). Assume the n average pooling layers have a stride equal to their mask size and perform averages over a1, ..., an nodes respectively. Then ‖∂xfk‖2 and |∂xfk| scale like 1/√(a1 · · · an) and 1/√(d a1 · · · an) respectively.
Proof in Appendix B.4. Theorem 7 suggests to try and replace any strided convolution by its non-strided counterpart, followed by an average-pooling layer. It also shows that if we systematically reduce the number of pixels per channel down to 1 by using only non-strided convolutions and average-pooling layers (i.e. d = ∏_{i=1}^{n} ai), then all input-to-output gradients should become independent of d, thereby making the network completely robust to adversarial examples.
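In PyTorch terms, a sketch of this substitution with assumed channel counts:

```python
import torch.nn as nn

# A strided convolution, which Corollary 6 says does not help against adversarial examples ...
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)

# ... versus its stride-1 counterpart followed by an average pooling, whose deterministic
# 1/(in-degree) weights dampen the input-to-output gradients (Theorem 7).
unstrided_plus_avg = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
    nn.AvgPool2d(kernel_size=2, stride=2),
)
```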
Our following experiments (Figure 5) show that after training, the networks are indeed robustified against adversarial examples, but remain more vulnerable than suggested by Theorem 7.
Experimental setup. Theorem 7 shows that, contrary to strided layers, average-poolings should decrease adversarial vulnerability. We tested this hypothesis on CNNs trained on CIFAR-10, with 6 blocks of ‘convolution → BatchNorm→ReLU’ with 64 output-channels, followed by a final average pooling feeding one neuron per channel to the last fully-connected linear layer. Additionally, after every second convolution, we placed a pooling layer with stride and mask-size (2, 2) (thus acting on 2× 2
neurons at a time, without overlap). We tested average-pooling, strided and max-pooling layers and trained 20 networks per architecture. Results are shown in Figure 5. All accuracies are very close, but, as predicted, the networks with average pooling layers are more robust to adversarial images than the others. However, they remain more vulnerable than what would follow from Theorem 7. We also noticed that, contrary to the strided architectures, their gradients after training are an order of magnitude higher than at initialization and than predicted. This suggests that assumptions (H) get more violated when using average-poolings instead of strided layers. Understanding why will need further investigations.
B PROOFS
B.1 PROOF OF PROPOSITION 3
Proof. Let δ be an adversarial perturbation with ‖δ‖ = 1 that locally maximizes the loss increase at point x, meaning that δ = arg max_{‖δ′‖≤1} ∂xL · δ′. Then, by definition of the dual norm of ∂xL we have: ∂xL · (εδ) = ε |||∂xL|||. Thus
L̃_{ε,‖·‖}(x, c) = (1/2) (L(x, c) + L(x+εδ, c)) = (1/2) (2L(x, c) + ε |∂xL · δ| + o(ε‖δ‖)) = L(x, c) + (ε/2) |||∂xL||| + o(ε) = L_{ε,|||·|||}(x, c) + o(ε) .
B.2 PROOF OF THEOREM 4
Proof. Let x designate a generic coordinate of x. To evaluate the size of ‖∂xL‖q, we will evaluate the size of the coordinates ∂xL of ∂xL by decomposing them into
∂xL = ∑_{k=1}^{K} (∂L/∂fk) (∂fk/∂x) =: ∑_{k=1}^{K} ∂kL ∂xfk,
where fk(x) denotes the logit-probability of x belonging to class k. We now investigate the statistical properties of the logit gradients ∂xfk, and then see how they shape ∂xL.
Step 1: Statistical properties of ∂xfk. Let P(x, k) be the set of paths p from input neuron x to output-logit k. Let p− 1 and p be two successive neurons on path p, and p̃ be the same path p but without its input neuron. Let wp designate the weight from p− 1 to p and ωp be the path-product ωp := ∏ p∈p̃ wp. Finally, let σp (resp. σp) be equal to 1 if the ReLU of node p (resp. if path p) is active for input x, and 0 otherwise.
As previously noticed by Balduzzi et al. (2017) using the chain rule, we see that ∂xfk is the sum of all ωp whose path is active, i.e. ∂xfk(x) = ∑ p∈P(x,k) ωpσp. Consequently:
E_{W,σ}[∂xfk(x)²] = ∑_{p∈P(x,k)} ∏_{p∈p̃} E_W[w_p²] E_σ[σ_p²] = |P(x, k)| ∏_{p∈p̃} (2/d_{p−1}) (1/2) = ∏_{p∈p̃} d_p · ∏_{p∈p̃} (1/d_{p−1}) = 1/d . (8)
The first equality uses H1 to decouple the expectations over weights and ReLUs, and then applies Lemma 10 of Appendix B.3, which uses H3-H5 to kill all cross-terms and take the expectation over weights inside the product. The second equality uses H3 and the fact that the resulting product is the same for all active paths. The third equality counts the number of paths from x to k and we conclude by noting that all terms cancel out, except dp−1 from the input layer which is d. Equation 8 shows that |∂xfk| ∝ 1/ √ d.
Step 2: Statistical properties of ∂kL and ∂xL. Defining qk(x) := e^{fk(x)} / ∑_{h=1}^{K} e^{fh(x)} (the probability of image x belonging to class k according to the network), we have, by definition of the cross-entropy loss, L(x, c) := − log qc(x), where c is the label of the target class. Thus:
∂kL(x) = −qk(x) if k ≠ c, and 1 − qc(x) otherwise, and
∂xL(x) = (1 − qc) ∂xfc(x) + ∑_{k≠c} qk (−∂xfk(x)). (9)
Using again Lemma 10, we see that the ∂xfk(x) are K centered and uncorrelated variables. So ∂xL(x) is approximately the sum of K uncorrelated variables with zero-mean, and its total variance is given by ((1 − qc)² + ∑_{k≠c} qk²)/d. Hence the magnitude of ∂xL(x) is 1/√d for all x, so the `q-norm of the full input gradient is d^{1/q−1/2}. (6) concludes.
Remark 1. Equation 9 can be rewritten as
∂xL(x) = ∑_{k=1}^{K} qk(x) (∂xfc(x) − ∂xfk(x)) . (10)
As the term k = c disappears, the norm of the gradients ∂xL(x) appears to be controlled by the total error probability. This suggests that, even without regularization, trying to decrease the ordinary
classification error is still a valid strategy against adversarial examples. It reflects the fact that when increasing the classification margin, larger gradients of the classifier’s logits are needed to push images from one side of the classification boundary to the other. This is confirmed by Theorem 2.1 of Hein & Andriushchenko (2017). See also (16) in Appendix C.
B.3 PROOF OF THEOREM 5
The proof of Theorem 5 is very similar to the one of Theorem 4, but we will need to first generalize the equalities appearing in (8). To do so, we identify the computational graph of a neural network to an abstract Directed Acyclic Graph (DAG) which we use to prove the needed algebraic equalities. We then concentrate on the statistical weight-interactions implied by assumption (H), and finally throw these results together to prove the theorem. In all the proof, o will designate one of the output-logits fk(x).
Lemma 8. Let x be the vector of inputs to a given DAG, o be any leaf-node of the DAG, x a generic coordinate of x. Let p be a path from the set of paths P(x, o) from x to o, p̃ the same path without node x, p a generic node in p̃, and dp be its input-degree. Then:
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1 (11)
Proof. We will reason on a random walk starting at o and going up the DAG by choosing any incoming node with equal probability. The DAG being finite, this walk will end up at an input-node x with probability 1. Each path p is taken with probability ∏_{p∈p̃} 1/d_p. And the probability to end up at an input-node is the sum of all these probabilities, i.e. ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} d_p^{−1}, which concludes.
The sum over all inputs x in (11) being 1, on average it is 1/d for each x, where d is the total number of inputs (i.e. the length of x). It becomes an equality under assumption (S): Lemma 9. Under the symmetry assumption (S), and with the previous notations, for any input x ∈ x:
∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1/d . (12)
Proof. Let us denote D(x, o) := {dp}_{p∈P(x,o)}. Each path p in P(x, o) corresponds to exactly one element dp in D(x, o) and vice-versa. And the elements dp of dp completely determine the product ∏_{p∈p̃} d_p^{−1}. By using (11) and the fact that, by (S), the multiset D(x, o) is independent of x, we hence conclude
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = ∑_{x∈x} ∑_{dp∈D(x,o)} ∏_{dp∈dp} 1/d_p = d ∑_{dp∈D(x,o)} ∏_{dp∈dp} 1/d_p = 1 .
Now, let us relate these considerations on graphs to gradients and use assumptions (H). We recall that the path-product ωp is the product ∏_{p∈p̃} wp.
Lemma 10. Under assumptions (H), the path-products ωp, ωp′ of two distinct paths p and p′ starting from a same input node x, satisfy:
E_W[ωp ωp′] = 0 and E_W[ωp²] = ∏_{p∈p̃} E_W[w_p²] .
Furthermore, if there is at least one non-average-pooling weight on path p, then EW [ωp] = 0.
Proof. Hypothesis H4 yields
E_W[ωp²] = E_W[∏_{p∈p̃} w_p²] = ∏_{p∈p̃} E_W[w_p²] .
Now, take two different paths p and p′ that start at a same node x. Starting from x, consider the first node after which p and p′ part and call p and p′ the next nodes on p and p′ respectively. Then the weights wp and wp′ are two weights of a same node. Applying H4 and H5 hence gives
E_W[ωp ωp′] = E_W[ω_{p\p} ω_{p′\p′}] E_W[wp wp′] = 0 .
Finally, if p has at least one non-average-pooling node p, then successively applying H4 and H3 yields: E_W[ωp] = E_W[ω_{p\p}] E_W[wp] = 0.
We now have all elements to prove Theorem 5.
Proof. (of Theorem 5) For a given neuron p in p̃, let p−1 designate the node preceding p on the path p. Let σp (resp. σp) be a variable equal to 0 if neuron p gets killed by its ReLU (resp. if path p is inactive), and 1 otherwise. Then:
∂xo = ∑_{p∈P(x,o)} ∏_{p∈p̃} ∂_{p−1} p = ∑_{p∈P(x,o)} ωp σp
Consequently:
E_{W,σ}[(∂xo)²] = ∑_{p,p′∈P(x,o)} E_W[ωp ωp′] E_σ[σp σp′] = ∑_{p∈P(x,o)} ∏_{p∈p̃} E_W[w_p²] E_σ[σ_p²] (13)
= ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1/d ,
where the first line uses the independence between the ReLU killings and the weights (H1), the second uses Lemma 10 and the last uses Lemma 9. The gradient ∂xo thus has coordinates whose squared expectations scale like 1/d. Thus each coordinate scales like 1/√d and ‖∂xo‖q like d^{1/q−1/2}. Conclude on ‖∂xL‖q and ε_p ‖∂xL‖q by using Step 2 of the proof of Theorem 4. Finally, note that, even without the symmetry assumption (S), using Lemma 8 shows that
E_W[‖∂xo‖2²] = ∑_{x∈x} E_W[(∂xo)²] = ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1 .
Thus, with or without (S), ‖∂xo‖2 is independent of the input-dimension d.
B.4 PROOF OF THEOREM 7
To prove Theorem 7, we will actually prove the following more general theorem, which generalizes Theorem 5. Theorem 7 is a straightforward corollary of it. Theorem 11. Consider any feed-forward network with linear connections and ReLU activation functions that outputs logits fk(x) and satisfies assumptions (H). Suppose that there is a fixed multiset of integers {a1, . . . , an} such that each path from input to output traverses exactly n average pooling nodes with degrees {a1, . . . , an}. Then:
‖∂xfk‖2 ∝ 1 / ∏_{i=1}^{n} √a_i . (14)
Furthermore, if the net satisfies the symmetry assumption (S), then: |∂xfk| ∝ 1/√(d ∏_{i=1}^{n} a_i) .
Two remarks. First, in all this proof, “weight” encompasses both the standard random weights, and the constant (deterministic) weights equal to 1/(in-degree) of the average-poolings. Second, assumption H5 implies that the average-pooling nodes have disjoint input nodes: otherwise, there would be two non-zero deterministic weights w, w′ from a same neuron that would hence satisfy: E_W[ww′] ≠ 0.
Proof. As previously, let o designate any fixed output-logit fk(x). For any path p, let a be the set of average-pooling nodes of p and let q be the set of remaining nodes. Each path-product ωp satisfies: ωp = ωq ωa, where ωa is a same fixed constant. For two distinct paths p, p′, Lemma 10 therefore yields: E_W[ωp²] = ωa² E_W[ωq²] and E_W[ωp ωp′] = 0. Combining this with Lemma 9 and under assumption (S), we get similarly to (13):
E_{W,σ}[(∂xo)²] = ∑_{p,p′∈P(x,o)} ωa ωa′ E_W[ωq ωq′] E_σ[σq σq′]
= ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i²) ∏_{q∈q̃} E_W[w_q²] E_σ[σ_q²]
= ∏_{i=1}^{n} (1/a_i) · ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q) (1/2) (15)
= (1/d) ∏_{i=1}^{n} (1/a_i) ,
where the factor ∏_{i=1}^{n} (1/a_i) in front of the sum takes the same value for all p, and the remaining sum equals ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1/d by Lemma 9.
Therefore, |∂xo| = |∂xfk| ∝ 1/√(d ∏_{i=1}^{n} a_i). Again, note that, even without assumption (S), using (15) and Lemma 8 shows that
E_W[‖∂xo‖2²] = ∑_{x∈x} E_{W,σ}[(∂xo)²] = ∑_{x∈x} ∏_{i=1}^{n} (1/a_i) ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{p∈p̃} (2/d_p) (1/2) (by (15))
= ∏_{i=1}^{n} (1/a_i) ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = ∏_{i=1}^{n} (1/a_i) ,
where the last double sum equals 1 by Lemma 8,
which proves (14).
C COMPARISON TO THE CROSS-LIPSCHITZ REGULARIZER
In their Theorem 2.1, Hein & Andriushchenko (2017) show that the minimal ε = ‖δ‖p perturbation to fool the classifier must be bigger than:
min_{k≠c} [ fc(x) − fk(x) ] / [ max_{y∈B(x,ε)} ‖∂xfc(y) − ∂xfk(y)‖q ] . (16)
They argue that the training procedure typically already tries to maximize fc(x)− fk(x), thus one only needs to additionally ensure that ‖∂xfc(x)− ∂xfk(x)‖q is small. They then introduce what they call a Cross-Lipschitz Regularization, which corresponds to the case p = 2 and involves the gradient differences between all classes:
RxLip := (1/K²) ∑_{k,h=1}^{K} ‖∂xfh(x) − ∂xfk(x)‖2² (17)
In contrast, using (10), (the square of) our proposed regularizer ‖∂xL‖q from (4) can be rewritten, for p = q = 2 as:
R_{‖·‖2}(f) = ∑_{k,h=1}^{K} qk(x) qh(x) (∂xfc(x) − ∂xfk(x)) · (∂xfc(x) − ∂xfh(x)) (18)
Although both (17) and (18) consist of K² terms, corresponding to the K² cross-interactions between the K classes, the big difference is that while in (17) all classes play exactly the same role, in (18) the summands all refer to the target class c in at least two different ways. First, all gradient differences are always taken with respect to ∂xfc. Second, each summand is weighted by the probabilities qk(x) and qh(x) of the two involved classes, meaning that only the classes with a non-negligible probability get their gradient regularized. This reflects the idea that only points near the margin need a gradient regularization, which incidentally will make the margin sharper.
D PERCEPTION THRESHOLD
To keep the average pixel-wise variation constant across dimensions d, we saw in (3) that the threshold ε_p of an `p-attack should scale like d^{1/p}. We will now see another justification for this scaling. Contrary to the rest of this work, where we use a fixed ε_p for all images x, here we will let ε_p depend on the `2-norm of x. If, as usual, the dataset is normalized such that the pixels have on average variance 1, both approaches are almost equivalent.
Suppose that given an `p-attack norm, we want to choose ε_p such that the signal-to-noise ratio (SNR) ‖x‖2 / ‖δ‖2 of a perturbation δ with `p-norm ≤ ε_p is never greater than a given SNR threshold 1/ε. For p = 2 this imposes ε_2 = ε ‖x‖2. More generally, studying the inclusion of `p-balls in `2-balls yields
ε_p = ε ‖x‖2 d^{1/p−1/2} . (19)
Note that this gives again ε_p = ε_∞ d^{1/p}. This explains how to adjust the threshold with varying `p-attack norm.
Now, let us see how to adjust the threshold of a given `p-norm when the dimension d varies. Suppose that x is a natural image and that decreasing its dimension means either decreasing its resolution or cropping it. Because the statistics of natural images are approximately resolution and scale invariant (Huang, 2000), in either case the average squared value of the image pixels remains unchanged, which implies that ‖x‖2 scales like √ d. Pasting this back into (19), we again get:
ε_p = ε_∞ d^{1/p} .
In particular, ε_∞ ∝ ε is a dimension-free number, exactly like in (3) of the main part. Now, why did we choose the SNR as our invariant reference quantity and not anything else? One reason is that it corresponds to a physical power ratio between the image and the perturbation, which we think the human eye is sensitive to. Of course, the eye’s sensitivity also depends on the spectral frequency of the signals involved, but we are only interested in orders of magnitude here.
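A small numerical illustration of this calibration (the values are our own example): for a fixed dimension-free ε_∞, the thresholds ε_p = ε_∞ d^{1/p} grow with the image resolution exactly so that the implied per-pixel perturbation stays of the same order across `p-norms.

```python
import numpy as np

eps_inf = 0.005
for d in [3 * 32 * 32, 3 * 256 * 256]:
    thresholds = {p: eps_inf * d ** (1.0 / p) for p in [1.0, 2.0, np.inf]}
    print(d, thresholds)   # eps_1 scales like d, eps_2 like sqrt(d), eps_inf stays constant
```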
Another point: any image x yields an adversarial perturbation δx, where by constraint ‖x‖2 / ‖δx‖2 ≤ 1/ε. For `2-attacks, this inequality is actually an equality. But what about other `p-attacks: (on average over x,) how far is the signal-to-noise ratio from its imposed upper bound 1/ε? For p ∉ {1, 2, ∞}, the answer unfortunately depends on the pixel-statistics of the images. But when p is 1 or ∞, then the situation is locally the same as for p = 2. Specifically: Lemma 12. Let x be a given input and ε > 0. Let ε_p be the greatest threshold such that for any δ with ‖δ‖p ≤ ε_p, the SNR ‖x‖2 / ‖δ‖2 is ≤ 1/ε. Then ε_p = ε ‖x‖2 d^{1/p−1/2}. Moreover, for p ∈ {1, 2, ∞}, if δx is the ε_p-sized `p-attack that locally maximizes the loss-increase i.e. δx = arg max_{‖δ‖p≤ε_p} |∂xL · δ|, then:
SNR(x) := ‖x‖2 / ‖δx‖2 = 1/ε and Ex [SNR(x)] = 1/ε .
Proof. The first paragraph follows from the fact that the greatest `p-ball included in an `2-ball of radius ε ‖x‖2 has radius ε ‖x‖2 d^{1/p−1/2}.
The second paragraph is clear for p = 2. For p =∞, it follows from the fact that δx = ∞ sign∂xL which satisfies: ‖δx‖2 = ∞ √ d = ‖x‖2. For p = 1, it is because δx = 1 maxi=1..d |(∂xL)i|,
which satisfies: ‖δx‖2 = 2/ √ d = ‖x‖2.
Intuitively, this means that for p ∈ {1, 2, ∞}, the SNR of ε_p-sized `p-attacks on any input x will be exactly equal to its fixed limit 1/ε. And in particular, the mean SNR over samples x is the same (1/ε) in all three cases.
E VULNERABILITY-DIMENSION DEPENDENCE USING DOWNSIZED IMAGENET IMAGES
We also ran a similar experiment as in Section 4.2, but instead of using upsampled CIFAR-10 images, we created a 12-class dataset of approximately 80,000 3 × 256 × 256-sized RGB images by merging similar ImageNet-classes, resizing the smallest image-edge to 256 pixels and
center-cropping the result. We then downsized the images to 32, 64, 128 and 256 pixels per edge, and trained, not 1, but 10 CNNs per image-size. We then computed their adversarial vulnerability and average ‖∂xL‖1. This gave us 2 values per trained net, i.e. 2 x 10 values per image-size, which are shown in Figure 6. The lines follow their medians, the errorbars show their 10th and 90th quantiles. The conclusions are identical to Section 4.2: after usual training, the vulnerability and gradient-norms still increase like √d. Note that, as the gradients get much larger at higher dimensions, the first order approximation in (2) becomes less and less valid, which explains the little inflection of the adversarial vulnerability curve. For smaller ε-thresholds, we verified that the inflection disappears.
F FIGURES WITH AN `2 PERTURBATION-THRESHOLD AND DEEP-FOOL ATTACKS
Here we plot the same curves as in the main part, but using an `2-attack threshold of size ε_2 = 0.005 √d instead of the `∞-threshold, and DeepFool attacks (Moosavi-Dezfooli et al., 2016) instead of iterative `∞-ones in Figs. 8 and 9. Note that contrary to `∞-thresholds, `2-thresholds must be rescaled by √d to stay consistent across dimensions (see Eq. 3 and Appendix D). All curves look essentially the same as their counterparts in the main text.
G A VARIANT OF ADVERSARIALLY-AUGMENTED TRAINING
In usual adversarially-augmented training, the adversarial image x+ δ is generated on the fly, but is nevertheless treated as a fixed input of the neural net, which means that the gradient does not get backpropagated through δ. This need not be. As δ is itself a function of x, the gradients could actually also be backpropagated through δ. As it was only a one-line change of our code, we used this opportunity to test this variant of adversarial training (FGSM-variant in Figure 2) and thank Martín Arjovsky for suggesting it. But except for an increased computation time, we found no significant difference compared to usual augmented training. | 1. What is the main contribution of the paper regarding neural network vulnerability?
2. What are the strengths and limitations of the theoretical result proved by the authors?
3. How do the experimental findings support or contradict the main theoretical result?
4. Do you have any concerns about the significance or implications of the research?
5. Are there any minor suggestions for improving the clarity of the paper? | Review | Review
The paper studies how the vulnerability of a neural network model depends on its input dimension. The authors prove that for an *untrained* model, randomly initialized with Xavier initialization, the gradient of the loss wrt the input is essentially independent of the architecture and task. This implies that the major factor affecting the norm of that gradient is the input dimension. They then support their argument by experiments measuring the relation between adversarial vulnerability and gradient norm using various *trained* models (including adversarially regularized ones).
I find the main theoretical result interesting. While this is a known fact for the simple case of linear classifiers, extending it to arbitrarily deep networks is a valuable contribution. The proof crucially relies on properties of the specific initialization scheme to show that the gradient does not change too much during backproparagation through the layers. The most significant limitation of the result (which the authors kindly acknowledge) is that this result only holds at initialization. Hence it cannot distinguish between different training methods or between how different architectures evolve during training. Since the situation in adversarial robustness is much more nuanced, I am skeptical about the significance of such statements.
On the experimental side, the finding that gradient regularization improves adversarial robustness to small epsilon values has been made multiple times in the past (as the authors cite in the related work section). It is worth noting that the epsilon considered is 0.005 in L_inf (1.275/255) which is pretty small. This value corresponds to the "small-epsilon regime" where the behavior of the model is fairly linear around the original inputs and thus defenses such as FGSM-training and gradient regularization are effective.
The authors also perform an interesting experiment where they train models on downsampled ImageNet datasets and find that indeed larger input dimension leads to more vulnerable models.
While I find the results interesting, I do not see clear implications. The fact that the vulnerability of a classifier depends on the L1 norm of the input gradient is already known for any locally linear classifier (i.e. deep models too), and it is fairly clear that the L1 norm will have a dimension dependence. The fact that it does not depend on architecture or task at initialization is interesting but of limited significance in my opinion. Given that the experimental results are also not particularly novel, I recommend rejection.
[UPDATE]: Given the overall discussion and paper updates, I consider the current version of the paper (marginally) crossing the ICLR bar. I update my score from a 5 to a 6.
Minor comments to the authors:
-- I think || x ||_* is more clear than |||x||| for the dual norm.
-- Consider using lambda for the regularization, epsilon is confusing since it is overloaded. |
ICLR | Title
Adversarial Vulnerability of Neural Networks Increases with Input Dimension
Abstract
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the `1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
1 INTRODUCTION
Following the work of Goodfellow et al. (2015), Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs. A number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs. Of course, this view, which we will focus on here, assumes that the network and loss are differentiable. It has the advantage to yield a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss. Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level.
Contributions. More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability. Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture. That leaves them increasingly vulnerable to adversarial noise. We corroborate our theoretical results by extensive experiments. Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis. We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability. This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.
2 FROM ADVERSARIAL EXAMPLES TO LARGE GRADIENTS
Suppose that a given classifier ϕ classifies an image x as being in category ϕ(x). An adversarial image is a small modification of x, barely noticeable to the human eye, that suffices to fool the classifier into predicting a class different from ϕ(x). It is a small perturbation of the inputs that creates a large variation of outputs. Adversarial examples thus seem inherently related to large gradients of the network, a connection that we will now clarify. Note that visible adversarial examples sometimes appear in the literature, but we deliberately focus on imperceptible ones.
Adversarial vulnerability and adversarial damage. In practice, an adversarial image is constructed by adding a perturbation δ to the original image x such that ‖δ‖ ≤ ε for some (small) number ε and a given norm ‖·‖ over the input space. We call the perturbed input x+δ an ε-sized ‖·‖-attack and say that the attack was successful when ϕ(x+δ) ≠ ϕ(x). This motivates Definition 1. Given a distribution P over the input-space, we call adversarial vulnerability of a classifier ϕ to an ε-sized ‖·‖-attack the probability that there exists a perturbation δ of x such that
‖δ‖ ≤ ε and ϕ(x) ≠ ϕ(x+δ) . (1)
We call the average increase-after-attack Ex∼P [∆L] of a loss L the (L-) adversarial damage (of the classifier ϕ to an ε-sized ‖·‖-attack).
When L is the 0-1-loss L0/1, adversarial damage is the accuracy-drop after attack. The 0-1-loss damage is always smaller than adversarial vulnerability, because vulnerability counts all class-changes of ϕ(x), whereas some of them may be neutral to adversarial damage (e.g. a change between two wrong classes). The L0/1-adversarial damage thus lower bounds adversarial vulnerability. Both are even equal when the classifier is perfect (before attack), because then every change of label introduces an error. It is hence tempting to evaluate adversarial vulnerability with L0/1-adversarial damage.
From ∆L0/1 to ∆L and to ∂xL. In practice however, we do not train our classifiers with the non-differentiable 0-1-loss but use a smoother loss L, such as the cross-entropy loss. For similar reasons, we will now investigate the adversarial damage Ex [∆L(x, c)] with loss L rather than L0/1. Like for Goodfellow et al. (2015); Lyu et al. (2015); Sinha et al. (2018) and many others, a classifier ϕ will hence be robust if, on average over x, a small adversarial perturbation δ of x creates only a small variation δL of the loss. Now, if ‖δ‖ ≤ ε, then a first order Taylor expansion in ε shows that
δL = max_{δ : ‖δ‖ ≤ ε} |L(x + δ, c) − L(x, c)| ≈ max_{δ : ‖δ‖ ≤ ε} |∂xL · δ| = ε |||∂xL||| , (2)
where ∂xL denotes the gradient of L with respect to x, and where the last equality stems from the definition of the dual norm |||·||| of ‖·‖. Now two remarks. First: the dual norm only kicks in because we let the input noise δ optimally adjust to the coordinates of ∂xL within its ε-constraint. This is the brand mark of adversarial noise: the different coordinates add up, instead of statistically canceling each other out as they would with random noise. For example, if we impose that ‖δ‖2 ≤ ε, then δ will strictly align with ∂xL. If instead ‖δ‖∞ ≤ ε, then δ will align with the sign of the coordinates of ∂xL. Second remark: while the Taylor expansion in (2) becomes exact for infinitesimal perturbations, for finite ones it may actually be dominated by higher-order terms. Our experiments (Figures 1 & 2) however strongly suggest that in practice the first order term dominates the others. Now, remembering that the dual norm of an `p-norm is the corresponding `q-norm, and summarizing, we have proven Lemma 2. At first order approximation in ε, an ε-sized adversarial attack generated with norm ‖·‖ increases the loss L at point x by ε |||∂xL|||, where |||·||| is the dual norm of ‖·‖. In particular, an ε-sized `p-attack increases the loss by ε ‖∂xL‖q where 1 ≤ p ≤ ∞ and 1/p + 1/q = 1.
Consequently, the adversarial damage of a classifier with loss L to ε-sized attacks generated with norm ‖·‖ is ε Ex|||∂xL|||. This is valid only at first order, but it proves that at least this kind of first-order vulnerability is present. We will see that the first-order predictions closely match the experiments, and that this insight helps protecting even against iterative (non-first-order) attack methods (Figure 1).
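To make Lemma 2 concrete, the following sketch (ours, not from the paper; it assumes a PyTorch classifier `model` and a labelled batch `(x, y)`) compares the predicted first-order loss increase ε ‖∂xL‖1 for an ε-sized `∞-attack with the loss increase actually obtained from the corresponding single-step attack.

```python
import torch
import torch.nn.functional as F

def first_order_damage_check(model, x, y, eps):
    # Per-example input gradients: use a summed loss so grad[i] is d L(x_i) / d x_i.
    x = x.clone().requires_grad_(True)
    loss_sum = F.cross_entropy(model(x), y, reduction="sum")
    grad = torch.autograd.grad(loss_sum, x)[0]

    # Lemma 2 with p = inf (so q = 1): predicted increase is eps * ||dL/dx||_1.
    predicted = eps * grad.abs().flatten(1).sum(dim=1)

    # Actual increase after the eps-sized l_inf attack delta = eps * sign(dL/dx).
    x_adv = (x + eps * grad.sign()).detach()
    with torch.no_grad():
        actual = (F.cross_entropy(model(x_adv), y, reduction="none")
                  - F.cross_entropy(model(x), y, reduction="none"))
    return predicted.detach(), actual
```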
Calibrating the threshold to the attack-norm ‖·‖. Lemma 2 shows that adversarial vulnerability depends on three main factors: (i) ‖·‖, the norm chosen for the attack, (ii) ε, the size of the attack, and (iii) Ex|||∂xL|||, the expected dual norm of ∂xL. We could see Point (i) as a measure of our sensibility to image perturbations, (ii) as our sensibility threshold, and (iii) as the classifier’s expected marginal sensibility to a unit perturbation. Ex|||∂xL||| hence intuitively captures the discrepancy between our perception (as modeled by ‖·‖) and the classifier’s perception for an input-perturbation of small size ε. Of course, this viewpoint supposes that we actually found a norm ‖·‖ (or more generally a metric) that faithfully reflects human perception – a project in its own right, far beyond the scope of this paper. However, it is clear that the threshold ε that we choose should depend on the norm ‖·‖ and hence on the input-dimension d. In particular, for a given pixel-wise order of magnitude of the perturbations δ, the `p-norm of the perturbation will scale like d^{1/p}. This suggests to write the threshold ε_p used with `p-attacks as:
ε_p = ε_∞ d^{1/p} , (3)
where ε_∞ denotes a dimension-independent constant. In Appendix D we show that this scaling also preserves the average signal-to-noise ratio ‖x‖2 / ‖δ‖2, both across norms and dimensions, so that ε_p could correspond to a constant human perception-threshold. With this in mind, the impatient reader may already jump to Section 3, which contains our main contributions: the estimation of Ex‖∂xL‖q for standard feed-forward nets. Meanwhile, the rest of this section shortly discusses two straightforward defenses that we will use later and that further illustrate the role of gradients.
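A small helper (ours, not the authors') that applies the rescaling of Eq. (3); for instance, lp_threshold(0.005, 3*32*32, 2) gives the `2-budget matching an `∞-budget of 0.005 on CIFAR-10-sized inputs.

```python
import math

def lp_threshold(eps_inf: float, d: int, p: float) -> float:
    """Eq. (3): eps_p = eps_inf * d**(1/p); p = math.inf returns eps_inf unchanged."""
    return eps_inf if math.isinf(p) else eps_inf * d ** (1.0 / p)
```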
A new old regularizer. Lemma 2 shows that the loss of the network after an ε/2-sized ‖·‖-attack is
L_{ε,|||·|||}(x, c) := L(x, c) + (ε/2) |||∂xL||| . (4)
It is thus natural to take this loss-after-attack as a new training objective. Here we introduced a factor 1/2 for reasons that will become clear in a moment. Incidentally, for ‖·‖ = ‖·‖2, this new loss reduces to an old regularization-scheme proposed by Drucker & LeCun (1991) called double-backpropagation. At the time, the authors argued that slightly decreasing a function’s or a classifier’s sensitivity to input perturbations should improve generalization. In a sense, this is exactly our motivation when defending against adversarial examples. It is thus not surprising to end up with the same regularization term. Note that our reasoning only shows that training with one specific norm |||·||| in (4) helps to protect against adversarial examples generated from ‖·‖. A priori, we do not know what will happen for attacks generated with other norms; but our experiments suggest that training with one norm also protects against other attacks (see Figure 2 and Section 4.1).
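As a minimal illustration of Eq. (4), the sketch below (our own hedged PyTorch rendering, not the authors' code; `model` is a placeholder classifier) adds (ε/2) ‖∂xL‖q to the cross-entropy; with q = 2 this is double-backpropagation, with q = 1 it is the dual of `∞-attacks.

```python
import torch
import torch.nn.functional as F

def loss_after_attack(model, x, y, eps, q=1):
    x = x.clone().requires_grad_(True)
    per_example = F.cross_entropy(model(x), y, reduction="none")
    # create_graph=True keeps the input gradient inside the graph so the penalty is trainable.
    grads = torch.autograd.grad(per_example.sum(), x, create_graph=True)[0]
    penalty = grads.flatten(1).norm(p=q, dim=1).mean()
    return per_example.mean() + 0.5 * eps * penalty
```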
Link to adversarially-augmented training. In (1), ε designates an attack-size threshold, while in (4), it is a regularization-strength. Rather than a notation conflict, this reflects an intrinsic duality between two complementary interpretations of ε, which we now investigate further. Suppose that, instead of using the loss-after-attack, we augment our training set with ε-sized ‖·‖-attacks x + δ, where for each training point x, the perturbation δ is generated on the fly to locally maximize the loss-increase. Then we are effectively training with
L̃_{ε,‖·‖}(x, c) := (1/2) (L(x, c) + L(x + δ, c)) , (5)
where by construction δ satisfies (2). We will refer to this technique as adversarially augmented training. It was first introduced by Goodfellow et al. (2015) with ‖·‖ = ‖·‖∞ under the name of FGSM1-augmented training. Using the first order Taylor expansion in ε of (2), this ‘old-plus-post-attack’ loss of (5) simply reduces to our loss-after-attack, which proves
Proposition 3. Up to first-order approximations in ε, L̃_{ε,‖·‖} = L_{ε,|||·|||} . Said differently, for small enough ε, adversarially-augmented training with ε-sized ‖·‖-attacks amounts to penalizing the dual norm |||·||| of ∂xL with weight ε/2. In particular, double-backpropagation corresponds to training with `2-attacks, while FGSM-augmented training corresponds to an `1-penalty on ∂xL.
This correspondence between training with perturbations and using a regularizer can be compared to Tikhonov regularization: Tikhonov regularization amounts to training with random noise Bishop (1995), while training with adversarial noise amounts to penalizing ∂xL. Section 4.1 verifies the correspondence between adversarial augmentation and gradient regularization empirically, which also strongly suggests the empirical validity of the first-order Taylor expansion in (2).
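For comparison with the gradient penalty above, here is a sketch of adversarially augmented training (Eq. 5) with `∞ (FGSM) attacks; again this is our own hedged rendering with a placeholder model, treating the perturbation as a fixed input as in standard practice (Appendix G discusses the variant that keeps δ inside the graph).

```python
import torch
import torch.nn.functional as F

def fgsm_augmented_loss(model, x, y, eps):
    # One-step l_inf attack on the current batch.
    x_ = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_), y), x_)[0]
    x_adv = (x + eps * grad.sign()).detach()   # delta treated as a fixed input

    # Eq. (5): average of the clean and attacked losses.
    return 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
```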
3 ESTIMATING ‖∂xL‖q TO EVALUATE ADVERSARIAL VULNERABILITY
In this section, we evaluate the size of ‖∂xL‖q for standard neural network architectures. We start with fully-connected networks, and finish with a much more general theorem that, not only encompasses CNNs (with or without strided convolutions), but also shows that the gradient-norms are essentially independent of the network topology. We start our analysis by showing how changing q affects the size of ‖∂xL‖q. Suppose for a moment that the coordinates of ∂xL have typical magnitude |∂xL|. Then ‖∂xL‖q scales like d^{1/q} |∂xL|. Consequently
ε_p ‖∂xL‖q ∝ ε_p d^{1/q} |∂xL| ∝ ε_∞ d |∂xL| . (6)
1 FGSM = Fast Gradient Sign Method
This equation carries two important messages. First, we see how ‖∂xL‖q depends on d and q. The dependence seems highest for q = 1. But once we account for the varying perceptibility threshold ε_p ∝ d^{1/p}, we see that adversarial vulnerability scales like d · |∂xL|, whatever `p-norm we use. Second, (6) shows that to be robust against any type of `p-attack at any input-dimension d, the average absolute value of the coefficients of ∂xL must grow slower than 1/d. Now, here is the catch, which brings us to our core insight.
3.1 CORE IDEA: ONE NEURON WITH MANY INPUTS
In order to preserve the activation variance of the neurons from layer to layer, the neural weights are usually initialized with a variance that is inversely proportional to the number of inputs per neuron. Imagine for a moment that the network consisted only of one output neuron o linearly connected to all input pixels. For the purpose of this example, we assimilate o and L. Because we initialize the weights with a variance of 1/d, their average absolute value |∂xo| ≡ |∂xL| grows like 1/ √ d, rather than the required 1/d. By (6), the adversarial vulnerability ‖∂xo‖q ≡ ‖∂xL‖q therefore increases like d/ √ d = √ d.
This toy example shows that the standard initialization scheme, which preserves the variance from layer to layer, causes the average coordinate-size |∂xL| to grow like 1/ √ d instead of 1/d. When an `∞-attack tweaks its -sized input-perturbations to align with the coordinate-signs of ∂xL, all coordinates of ∂xL add up in absolute value, resulting in an output-perturbation that scales like √ d and leaves the network increasingly vulnerable with growing input-dimension.
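The toy example can be checked numerically; the snippet below (ours) samples the weights of a single linear output neuron with variance 1/d and reports E‖∂x o‖1, which should grow like √d.

```python
import torch

def toy_gradient_growth(dims=(64, 256, 1024, 4096), trials=1000):
    for d in dims:
        w = torch.randn(trials, d) / d ** 0.5        # Var(w_j) = 1/d
        l1 = w.abs().sum(dim=1).mean().item()        # E ||d o / d x||_1 = E ||w||_1
        print(f"d = {d:5d}   E||grad||_1 = {l1:7.2f}   sqrt(d) = {d ** 0.5:6.1f}")
```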
3.2 GENERALIZATION TO DEEP NETWORKS
Our next theorems generalize the previous toy example to a very wide class of feedforward nets with ReLU activation functions. For illustration purposes, we start with fully connected nets and only then proceed to the broader class, which includes any succession of (possibly strided) convolutional layers. In essence, the proofs iterate our insight on one layer over a sequence of layers. They all rely on the following set (H) of hypotheses:
H1 Non-input neurons are followed by a ReLU killing half of its inputs, independently of the weights.
H2 Neurons are partitioned into layers, meaning groups that each path traverses at most once.
H3 All weights have 0 expectation and variance 2/(in-degree) (‘He-initialization’).
H4 The weights from different layers are independent.
H5 Two distinct weights w, w′ from a same node satisfy E [ww′] = 0.
If we follow common practice and initialize our nets as proposed by He et al. (2015), then H3-H5 are satisfied at initialization by design, while H1 is usually a very good approximation (Balduzzi et al., 2017). Note that such i.i.d. weight assumptions have been widely used to analyze neural nets and are at the heart of very influential and successful prior work (e.g., equivalence between neural nets and Gaussian processes as pioneered by Neal 1996). Nevertheless, they do not hold after training. That is why all our statements in this section are to be understood as orders of magnitudes that are very well satisfied at initialization in theory and in practice, and that we will confirm experimentally after training in Section 4. Said differently, while our theorems rely on the statistics of neural nets at initialization, our experiments confirm their conclusions after training.
Theorem 4 (Vulnerability of Fully Connected Nets). Consider a succession of fully connected layers with ReLU activations which takes inputs x of dimension d, satisfies assumptions (H), and outputs logits fk(x) that get fed to a final cross-entropy-loss layer L. Then the coordinates of ∂xfk grow like 1/ √ d, and
‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d . (7)
These networks are thus increasingly vulnerable to `p-attacks with growing input-dimension.
Theorem 4 is a special case of the next theorem, which will show that the previous conclusions are essentially independent of the network-topology. We will use the following symmetry assumption on the neural connections. For a given path p, let the path-degree dp be the multiset of encountered in-degrees along path p. For a fully connected network, this is the unordered sequence of layer-sizes
preceding the last path-node, including the input-layer. Now consider the multiset {dp}p∈P(x,o) of all path-degrees when p varies among all paths from input x to output o. The symmetry assumption (relatively to o) is
(S) All input nodes x have the same multiset {dp}p∈P(x,o) of path-degrees from x to o. Intuitively, this means that the statistics of degrees encountered along paths to the output are the same for all input nodes. This symmetry assumption is exactly satisfied by fully connected nets, almost satisfied by CNNs (up to boundary effects, which can be alleviated via periodic or mirror padding) and exactly satisfied by strided layers, if the layer-size is a multiple of the stride.
Theorem 5 (Vulnerability of Feedforward Nets). Consider any feed-forward network with linear connections and ReLU activation functions. Assume the net satisfies assumptions (H) and outputs logits fk(x) that get fed to the cross-entropy-loss L. Then ‖∂xfk‖2 is independent of the input dimension d and ε_2 ‖∂xL‖2 ∝ √d. Moreover, if the net satisfies the symmetry assumption (S), then |∂xfk| ∝ 1/√d and (7) still holds: ‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d.
Theorems 4 and 5 are proven in Appendix B. The main proof idea is that in the gradient norm computation, the He-initialization exactly compensates the combinatorics of the number of paths in the network, so that this norm becomes independent of the network topology. In particular, we get
Corollary 6 (Vulnerability of CNNs). In any succession of convolution and dense layers, strided or not, with ReLU activations, that satisfies assumptions (H) and outputs logits that get fed to the cross-entropy-loss L, the gradients of the logit-coordinates scale like 1/√d and (7) is satisfied. It is hence increasingly vulnerable with growing input-resolution to attacks generated with any `p-norm.
Appendix A shows that the network gradients are dampened when replacing strided layers by average poolings, essentially because average-pooling weights do not follow the He-init assumption H3.
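Theorem 4 is easy to probe numerically at initialization. The sketch below (ours; the layer widths, depth and trial count are arbitrary choices, not taken from the paper) builds a He-initialized ReLU MLP on d inputs and estimates E‖∂xL‖1, which should scale like √d.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def init_gradient_norm(d, width=256, depth=4, classes=10, trials=20):
    norms = []
    for _ in range(trials):
        sizes = [d] + [width] * depth
        layers = []
        for n_in, n_out in zip(sizes[:-1], sizes[1:]):
            layers += [nn.Linear(n_in, n_out), nn.ReLU()]
        net = nn.Sequential(*layers, nn.Linear(width, classes))
        for m in net.modules():
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight)    # H3: variance 2 / in-degree
                nn.init.zeros_(m.bias)
        x = torch.randn(1, d, requires_grad=True)
        loss = F.cross_entropy(net(x), torch.zeros(1, dtype=torch.long))
        norms.append(torch.autograd.grad(loss, x)[0].abs().sum().item())
    return sum(norms) / len(norms)
```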
4 EMPIRICAL RESULTS
In Section 4.1, we empirically verify the validity of the first-order Taylor approximation made in (2) (Fig.1), for example by checking the correspondence between loss-gradient regularization and adversarially-augmented training (Fig.2). Section 4.2 then empirically verifies that both the average `1-norm of ∂xL and the adversarial vulnerability grow like √d as predicted by Corollary 6. For all experiments, we approximate adversarial vulnerability using various attacks of the Foolbox package (Rauber et al., 2017). We use an `∞ attack-threshold of size ε_∞ = 0.005 (and later 0.002) which, for pixel-values ranging from 0 to 1, is completely imperceptible but suffices to fool the classifiers on a significant proportion of examples. This ε_∞-threshold should not be confused with the regularization-strengths ε appearing in (4) and (5), which will be varied in some experiments.
4.1 FIRST-ORDER APPROXIMATION, GRADIENT PENALTY, ADVERSARIAL AUGMENTATION
We train several CNNs with the same architecture to classify CIFAR-10 images (Krizhevsky, 2009). For each net, we use a specific training method with a specific regularization value ε. The training methods used were `1- and `2-penalization of ∂xL (Eq. 4), adversarial augmentation with `∞- and `2-attacks (Eq. 5), projected gradient descent (PGD) with randomized starts (7 steps per attack with step-size = 0.2 ε_∞; see Madry et al. 2018) and the cross-Lipschitz regularizer (Eq. 17 in Appendix C). We then test the adversarial vulnerability of each trained network using the following attack-methods: single-step `∞- (FGSM) and `2-attacks, iterative `∞- (PGD) and `2-attacks, and DeepFool attacks (Moosavi-Dezfooli et al., 2016). All networks have 6 ‘strided convolution → batchnorm → ReLU’ layers with strides [1, 2, 2, 2, 2, 2] respectively and 64 output-channels each, followed by a final fully-connected linear layer. Results are summarized in Figures 1 and 2. Figure 1 fixes the training method – gradient `1-regularization – and plots the obtained adversarial vulnerabilities for various attack types. Figure 2 fixes the attack type – iterative `∞-attacks – but plots the curves obtained for various training methods. Note that our goal here is not to advocate one defense over another, but rather to check the validity of the Taylor expansion, and empirically verify that first order terms (i.e., gradients) suffice to explain much of the observed adversarial vulnerability. Similarly, our goal in testing several attacks (Figure 1) is not to present a specifically strong one, but rather to verify that for all attacks, the trends are the same: the vulnerability grows with increasing gradients.
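For reference, a minimal PyTorch sketch of the architecture just described (kernel size and padding are our assumptions; the text only fixes the strides, the 64 output channels and the final linear layer):

```python
import torch.nn as nn

def strided_cifar_cnn(channels=64, strides=(1, 2, 2, 2, 2, 2), classes=10):
    layers, in_ch = [], 3
    for s in strides:
        layers += [nn.Conv2d(in_ch, channels, kernel_size=3, stride=s, padding=1),
                   nn.BatchNorm2d(channels), nn.ReLU()]
        in_ch = channels
    # On 32x32 inputs the strides reduce the feature map to 1x1 before the linear layer.
    return nn.Sequential(*layers, nn.Flatten(), nn.Linear(channels, classes))
```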
Validity of first order expansion. The following observations support the validity of the first order Taylor expansion in (2) and suggest that it is a crucial component of adversarial vulnerability: (i) the efficiency of the first-order defense against iterative (non-first-order) attacks (Fig.1a); (ii) the striking similarity between the PGD curves (adversarial augmentation with iterative attacks) and the other adversarial training curves (one-step attacks/defenses); (iii) the functional-like dependence between any approximation of adversarial vulnerability and Ex‖∂xL‖1 (Fig.1b), and its independence of the training method (Fig.2d); (iv) the excellent correspondence between the gradient-regularization and adversarial training curves (see next paragraph). Said differently, adversarial examples seem indeed to be primarily caused by large gradients of the classifier as captured via the induced loss. 2
2 On Figure 1, the two `∞-attacks seem more efficient than the others, because we chose an `∞ perturbation threshold (ε_∞). With an `2-threshold it is the opposite (see Figure 7, Appendix F).
Illustration of Proposition 3. The upper row of Figure 2 plots Ex‖∂xL‖1, adversarial vulnerability and accuracy as a function of ε d^{1/p}. The excellent match between the adversarial augmentation curve with p = ∞ (p = 2) and its gradient-regularization dual counterpart with q = 1 (resp. q = 2) illustrates the duality between ε as a threshold for adversarially-augmented training and ε as a regularization constant in the regularized loss (Proposition 3). It also supports the validity of the first-order Taylor expansion in (2).
Confirmation of (3). Still on the upper row, the curves for p = ∞, q = 1 have no reason to match those for p = q = 2 when plotted against ε, because an ε-threshold is relative to a specific attack-norm. However, (3) suggested that the rescaled thresholds ε d^{1/p} may approximately correspond to a same ‘threshold-unit’ across `p-norms and across dimension. This is well confirmed by the upper row plots: by rescaling the x-axis, the p = q = 2 and q = 1, p = ∞ curves get almost super-imposed. Accuracy-vs-Vulnerability Trade-Off. Merging Figures 2b and 2c by taking out ε, Figure 2f shows that all gradient regularization and adversarial training methods yield equivalent accuracy-vulnerability trade-offs. Incidentally, for higher penalization values, these trade-offs appear to be much better than those given by cross Lipschitz regularization.
The penalty-norm does not matter. We were surprised to see that on Figures 2d and 2f, the L_{ε,q} curves are almost identical for q = 1 and 2. This indicates that both norms can be used interchangeably in (4) (modulo proper rescaling of ε via (3)), and suggests that protecting against a specific attack-norm also protects against others. (6) may provide an explanation: if the coordinates of ∂xL behave like centered, uncorrelated variables with equal variance – which follows from assumptions (H) –, then the `1- and `2-norms of ∂xL are simply proportional. Plotting Ex‖∂xL(x)‖2 against Ex‖∂xL(x)‖1 in Figure 2e confirms this explanation. The slope is independent of the training method. Therefore, penalizing ‖∂xL(x)‖1 during training will not only decrease Ex‖∂xL‖1 (as shown in Figure 2a), but also drive down Ex‖∂xL‖2 and vice-versa.
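This proportionality is easy to monitor during training; the helper below (ours) computes the ratio for a batch of per-example input gradients, which under assumptions (H) should stay close to √(2d/π) for roughly Gaussian coordinates.

```python
def gradient_norm_ratio(grads):
    # grads: per-example input gradients of shape (B, C, H, W) or (B, d), as a torch tensor.
    g = grads.flatten(1)
    return (g.norm(p=1, dim=1) / g.norm(p=2, dim=1)).mean()
```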
4.2 VULNERABILITY GROWS WITH INPUT RESOLUTION
Theorems 4-5 and Corollary 6 predict a linear growth of the average `1-norm of ∂xL with the square root of the input dimension d, and therefore also of adversarial vulnerability (Lemma 2). To test these predictions, we upsampled the CIFAR-10 images (of size 3 x 32 x 32) by copying pixels so as to get 4 datasets with, respectively, 32, 64, 128 and 256 pixels per edge. We then trained a CNN on each dataset
and computed their adversarial vulnerability (with iterative `∞-attacks, threshold ε_∞ = 0.002) and average ‖∂xL‖1 over the last 20 epochs on the same held-out test-dataset. This gave us 2 x 20 values per net and image-size, summarized in Figure 3. The dashed lines follow their medians and the errorbars show their 10th and 90th quantiles. As predicted by our theorems, both ‖∂xL‖1 and adversarial vulnerability grow approximately linearly with √d. We also ran a similar experiment on downsized ImageNet images, where we train several identical nets per image-size rather than just one. Conclusions are unchanged. See Appendix E.
All networks had exactly the same amount of parameters and very similar structure across the various input-resolutions. The CNNs were a succession of 8 ‘convolution→ batchnorm→ ReLU’ layers with 64 output channels, followed by a final full-connection to the 12 logit-outputs. We used 2× 2- max-poolings after the convolutions of layers 2,4, 6 and 8, and a final max-pooling after layer 8 that fed only 1 neuron per channel to the fully-connected layer. To ensure that the convolution-kernels cover similar ranges of the images across each of the 32, 64, 128 and 256 input-resolutions, we respectively dilated all convolutions (‘à trous’) by a factor 1, 2, 4 and 8.
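A hedged sketch of the data side of this experiment (the exact preprocessing code is not given in the text): pixel-copy upsampling of CIFAR-10 images, together with the ‘à trous’ dilation used so that kernels cover comparable image fractions at every resolution.

```python
import torch.nn as nn
import torch.nn.functional as F

def upsample_by_copy(images, factor):
    # (B, C, 32, 32) -> (B, C, 32*factor, 32*factor) by copying pixels.
    return F.interpolate(images, scale_factor=factor, mode="nearest")

def dilated_conv(in_ch, out_ch, factor):
    # dilation 1, 2, 4, 8 for edge sizes 32, 64, 128, 256; padding keeps the spatial size.
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, dilation=factor, padding=factor)
```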
5 DISCUSSIONS
5.1 IMPLICATIONS: WHY PRIOR VULNERABILITY MAY MATTER
Our theoretical results show that the priors of classical neural networks yield vulnerable functions because of naturally high gradients. And our experiments (Fig 3&6) suggest that usual training does not escape these prior properties. But how may these insights help understanding the vulnerability of robustly trained networks? Clearly, to be successful, robust training algorithms must escape ill-behaved priors, which explains why most methods (e.g. FGSM, PGD) are essentially gradient penalization techniques. But, MNIST aside, even state-of-the-art methods largely fail at protecting current network architectures (Madry et al., 2018), and understanding why is the motivation for this and many other papers. Interestingly, Schmidt et al. (2018) recently noticed that those methods actually do protect the nets on training examples, but fail to generalize to the test set. They hence conclude that state-of-the-art robustification algorithms work, but need more data. Alternatively however, when generalization fails, one can also reduce the model’s complexity. Large fully connected nets for example typically fail to generalize to out-of-sample examples: getting accuracies similar to those of CNNs would need prohibitively many training points. Similarly, Schmidt et al.’s observations may suggest that, outside the training points, networks tend to recover their prior properties, i.e. naturally large gradients. Figure 4 corroborates this hypothesis. It plots the evolution over training epochs of the `1-gradient-norms of the CNNs from Section 4.2 (Fig 3) on the training and test sets respectively. The discrepancy is unmistakable: after a brief initialization phase, the norms decrease on the training set, but increase on the test set. They are moreover almost input-dimension independent on the training set, but scale as √d on the test set (as seen in Fig 3) up to respectively 2, 4, 8 and 16 times the training set values. These observations suggest that, with the current amount of data, tackling adversarial vulnerability may require new architectures with inherently smaller gradients. Searching these architectures among those with well-behaved prior-gradients seems a reasonable start, where our theoretical results may prove very useful.3
5.2 RELATED LITERATURE
On network vulnerability. Goodfellow et al. (2015) already stressed that adversarial vulnerability increases with growing dimension d. But their argument only relied on a linear ‘one-output-to-many-inputs’-model with dimension-independent weights. They therefore concluded on a linear growth of adversarial vulnerability with d. In contrast, our theory applies to almost any standard feed-forward architecture (not just linear), and shows that, once we adjust for the weight’s dimension-dependence, adversarial vulnerability increases like √d (not d), almost independently of the architecture. Nevertheless, our experiments confirm Goodfellow et al.’s idea that our networks are “too linear-like”, in the sense that a first-order Taylor expansion is indeed sufficient to explain the adversarial vulnerability of neural networks. As suggested by the one-output-to-many-inputs model, the culprit is that growing
3 Appendix A investigates such a preliminary direction by introducing average poolings, which have a weight-size 1/(in-channels) rather than the typical 1/√(in-channels) of the other He-initialized weights.
dimensionality gives the adversary more and more room to ‘wriggle around’ with the noise and adjust to the gradient of the output neuron. This wriggling, we show, is still possible when the output is connected to all inputs only indirectly, even when no neuron is directly connected to all inputs, like in CNNs. This explanation of adversarial vulnerability is independent of the intrinsic dimensionality or geometry of the data (compare to Amsaleg et al. 2017; Gilmer et al. 2018). Finally, let us mention that Fawzi et al. (2016) show a close link between the vulnerability to small worst-case perturbation (as studied here) and larger average perturbations. Our findings on the adversarial vulnerability of NNs to small perturbations could thus be translated accordingly.
On robustification algorithms. Incidentally, Goodfellow et al. (2015) also already relate adversarial vulnerability to large gradients of the loss L, an insight at the very heart of their FGSM-algorithm. They however do not propose any explicit penalizer on the gradient of L other than indirectly through adversarially-augmented training. Conversely, Ross & Doshi-Velez (2018) propose the old double-backpropagation to robustify networks but make no connection to FGSM and adversarial augmentation. Lyu et al. (2015) discuss and use the connection between gradient-penalties and adversarial augmentation, but never actually compare both in experiments. This comparison however is essential to test the validity of the first-order Taylor expansion in (2), as confirmed by the similarity between the gradient-regularization and adversarial-augmentation curves in Figure 2. Hein & Andriushchenko (2017) derived yet another gradient-based penalty – the cross-Lipschitz-penalty – by considering (and proving) formal guarantees on adversarial vulnerability itself, rather than adversarial damage. While both penalties are similar in spirit, focusing on the adversarial damage rather than vulnerability has two main advantages. First, it achieves better accuracy-to-vulnerability ratios, both in theory and practice, because it ignores class-switches between misclassified examples and penalizes only those that reduce the accuracy. Second, it allows us to deal with one number only, ∆L, whereas Hein & Andriushchenko’s cross-Lipschitz regularizer and theoretical guarantees explicitly involve all K logit-functions (and their gradients). See Appendix C. Penalizing network-gradients is also at the heart of contractive auto-encoders as proposed by Rifai et al. (2011), where it is used to regularize the encoder-features. Seeing adversarial training as a generalization method, let us also mention Hochreiter & Schmidhuber (1995), who propose to enhance generalization by searching for parameters in a “flat minimum region” of the loss. This leads to a penalty involving the gradient of the loss, but taken with respect to the weights, rather than the inputs. In the same vein, a gradient-regularization of the loss of generative models also appears in Proposition 6 of Ollivier (2014), where it stems from a code-length bound on the data (minimum description length). More generally, the gradient regularized objective (4) is essentially the first-order approximation of the robust training objective max_{‖δ‖≤ε} L(x + δ, c), which has a long history in math (Wald, 1945), machine learning (Xu et al., 2009) and now adversarial vulnerability (Sinha et al., 2018). Finally, Cisse et al. (2017) propose new network-architectures that have small gradients by design, rather than by special training: an approach that makes all the more sense, considering the conclusion of Theorems 4 and 5. For further details and references on adversarial attacks and defenses, we refer to Yuan et al. (2017).
6 CONCLUSION
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂xL of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (Figures 1&2d). We then evaluated the size of ‖∂xL‖q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to `p-attacks with growing input dimension d (the image-size), almost independently of their architecture. Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training set, but not on the test set. Schmidt et al. (2018) suggest that alleviating this generalization gap requires more data. But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors. Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.
A EFFECTS OF STRIDED AND AVERAGE-POOLING LAYERS ON ADVERSARIAL VULNERABILITY
It is common practice in CNNs to use average-pooling layers or strided convolutions to progressively decrease the number of pixels per channel. Corollary 6 shows that using strided convolutions does not protect against adversarial examples. However, what if we replace strided convolutions by convolutions with stride 1 plus an average-pooling layer? Theorem 5 considers only randomly initialized weights with typical size 1/ √ in-degree. Average-poolings however introduce deterministic weights of size 1/(in-degree). These are smaller and may therefore dampen the input-to-output gradients and protect against adversarial examples. We confirm this in our next theorem, which uses a slightly modified version (H′) of (H) to allow average pooling layers. (H′) is (H), but where the He-init H3 applies to all weights except the (deterministic) average pooling weights, and where H1 places a ReLU on every non-input and non-average-pooling neuron.
Theorem 7 (Effect of Average-Poolings). Consider a succession of convolution layers, dense layers and n average-pooling layers, in any order, that satisfies (H′) and outputs logits fk(x). Assume the n average pooling layers have a stride equal to their mask size and perform averages over a1, ..., an nodes respectively. Then ‖∂xfk‖2 and |∂xfk| scale like 1/√(a1 · · · an) and 1/√(d a1 · · · an) respectively.
Proof in Appendix B.4. Theorem 7 suggests to try and replace any strided convolution by its non-strided counterpart, followed by an average-pooling layer. It also shows that if we systematically reduce the number of pixels per channel down to 1 by using only non-strided convolutions and average-pooling layers (i.e. d = ∏_{i=1}^{n} a_i), then all input-to-output gradients should become independent of d, thereby making the network completely robust to adversarial examples.
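A sketch (ours; the kernel size is an assumption) of the swap suggested by Theorem 7: a stride-1 convolution followed by a 2×2 average pooling in place of a stride-2 convolution, with max pooling as the third variant tested in Figure 5 below.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, mode="avg"):
    if mode == "strided":
        return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
                             nn.BatchNorm2d(out_ch), nn.ReLU())
    pool = nn.AvgPool2d(2, 2) if mode == "avg" else nn.MaxPool2d(2, 2)
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(), pool)
```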
Our following experiments (Figure 5) show that after training, the networks are indeed robustified to adversarial examples, but remain more vulnerable than suggested by Theorem 7.
Experimental setup. Theorem 7 shows that, contrary to strided layers, average-poolings should decrease adversarial vulnerability. We tested this hypothesis on CNNs trained on CIFAR-10, with 6 blocks of ‘convolution → BatchNorm→ReLU’ with 64 output-channels, followed by a final average pooling feeding one neuron per channel to the last fully-connected linear layer. Additionally, after every second convolution, we placed a pooling layer with stride and mask-size (2, 2) (thus acting on 2× 2
neurons at a time, without overlap). We tested average-pooling, strided and max-pooling layers and trained 20 networks per architecture. Results are shown in Figure 5. All accuracies are very close, but, as predicted, the networks with average pooling layers are more robust to adversarial images than the others. However, they remain more vulnerable than what would follow from Theorem 7. We also noticed that, contrary to the strided architectures, their gradients after training are an order of magnitude higher than at initialization and than predicted. This suggests that assumptions (H) get more violated when using average-poolings instead of strided layers. Understanding why will need further investigations.
B PROOFS
B.1 PROOF OF PROPOSITION 3
Proof. Let δ be an adversarial perturbation with ‖δ‖ = 1 that locally maximizes the loss increase at point x, meaning that δ = arg max_{‖δ′‖≤1} ∂xL · δ′. Then, by definition of the dual norm of ∂xL we have: ∂xL · (ε δ) = ε |||∂xL|||. Thus
L̃_{ε,‖·‖}(x, c) = (1/2) (L(x, c) + L(x + ε δ, c)) = (1/2) (2 L(x, c) + ε |∂xL · δ| + o(ε ‖δ‖))
= L(x, c) + (ε/2) |||∂xL||| + o(ε) = L_{ε,|||·|||}(x, c) + o(ε) .
B.2 PROOF OF THEOREM 4
Proof. Let x designate a generic coordinate of x. To evaluate the size of ‖∂xL‖q, we will evaluate the size of the coordinates ∂xL of ∂xL by decomposing them into
∂xL = Σ_{k=1}^{K} (∂L/∂fk) (∂fk/∂x) =: Σ_{k=1}^{K} ∂kL ∂xfk ,
where fk(x) denotes the logit-probability of x belonging to class k. We now investigate the statistical properties of the logit gradients ∂xfk, and then see how they shape ∂xL.
Step 1: Statistical properties of ∂xfk. Let P(x, k) be the set of paths p from input neuron x to output-logit k. Let p− 1 and p be two successive neurons on path p, and p̃ be the same path p but without its input neuron. Let wp designate the weight from p− 1 to p and ωp be the path-product ωp := ∏ p∈p̃ wp. Finally, let σp (resp. σp) be equal to 1 if the ReLU of node p (resp. if path p) is active for input x, and 0 otherwise.
As previously noticed by Balduzzi et al. (2017) using the chain rule, we see that ∂xfk is the sum of all ωp whose path is active, i.e. ∂xfk(x) = ∑ p∈P(x,k) ωpσp. Consequently:
EW,σ[ ∂xfk(x)² ] = Σ_{p∈P(x,k)} ∏_{p∈p̃} EW[ w²_p ] Eσ[ σ²_p ] = |P(x, k)| ∏_{p∈p̃} (2/d_{p−1}) (1/2) = ∏_{p∈p̃} d_p · ∏_{p∈p̃} (1/d_{p−1}) = 1/d . (8)
The first equality uses H1 to decouple the expectations over weights and ReLUs, and then applies Lemma 10 of Appendix B.3, which uses H3-H5 to kill all cross-terms and take the expectation over weights inside the product. The second equality uses H3 and the fact that the resulting product is the same for all active paths. The third equality counts the number of paths from x to k and we conclude by noting that all terms cancel out, except dp−1 from the input layer which is d. Equation 8 shows that |∂xfk| ∝ 1/ √ d.
Step 2: Statistical properties of ∂kL and ∂xL. Defining qk(x) := e^{fk(x)} / Σ_{h=1}^{K} e^{fh(x)} (the probability of image x belonging to class k according to the network), we have, by definition of the cross-entropy loss, L(x, c) := − log qc(x), where c is the label of the target class. Thus:
∂kL(x) = −qk(x) if k ≠ c, and ∂kL(x) = 1 − qc(x) otherwise, and
∂xL(x) = (1 − qc) ∂xfc(x) + Σ_{k≠c} qk (−∂xfk(x)) . (9)
Using again Lemma 10, we see that the ∂xfk(x) are K centered and uncorrelated variables. So ∂xL(x) is approximately the sum of K uncorrelated variables with zero-mean, and its total variance is given by ( (1 − qc)² + Σ_{k≠c} q²_k ) / d. Hence the magnitude of ∂xL(x) is 1/√d for all x, so the `q-norm of the full input gradient is d^{1/q−1/2}. (6) concludes.
Remark 1. Equation 9 can be rewritten as
∂xL(x) = Σ_{k=1}^{K} qk(x) ( ∂xfc(x) − ∂xfk(x) ) . (10)
As the term k = c disappears, the norm of the gradients ∂xL(x) appears to be controlled by the total error probability. This suggests that, even without regularization, trying to decrease the ordinary
classification error is still a valid strategy against adversarial examples. It reflects the fact that when increasing the classification margin, larger gradients of the classifier’s logits are needed to push images from one side of the classification boundary to the other. This is confirmed by Theorem 2.1 of Hein & Andriushchenko (2017). See also (16) in Appendix C.
B.3 PROOF OF THEOREM 5
The proof of Theorem 5 is very similar to the one of Theorem 4, but we will need to first generalize the equalities appearing in (8). To do so, we identify the computational graph of a neural network to an abstract Directed Acyclic Graph (DAG) which we use to prove the needed algebraic equalities. We then concentrate on the statistical weight-interactions implied by assumption (H), and finally throw these results together to prove the theorem. In all the proof, o will designate one of the output-logits fk(x).
Lemma 8. Let x be the vector of inputs to a given DAG, o be any leaf-node of the DAG, x a generic coordinate of x. Let p be a path from the set of paths P(x, o) from x to o, p̃ the same path without node x, p a generic node in p̃, and dp be its input-degree. Then:
Σ_{x∈x} Σ_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1 (11)
Proof. We will reason on a random walk starting at o and going up the DAG by choosing any incoming node with equal probability. The DAG being finite, this walk will end up at an input-node x with probability 1. Each path p is taken with probability ∏_{p∈p̃} (1/d_p). And the probability to end up at an input-node is the sum of all these probabilities, i.e. Σ_{x∈x} Σ_{p∈P(x,o)} ∏_{p∈p̃} d_p^{−1}, which concludes.
The sum over all inputs x in (11) being 1, on average it is 1/d for each x, where d is the total number of inputs (i.e. the length of x). It becomes an equality under assumption (S):
Lemma 9. Under the symmetry assumption (S), and with the previous notations, for any input x ∈ x:
Σ_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1/d . (12)
Proof. Let us denote D(x, o) := {d_p}_{p∈P(x,o)}. Each path p in P(x, o) corresponds to exactly one element d_p in D(x, o) and vice-versa. And the elements dp of d_p completely determine the product ∏_{p∈p̃} (1/d_p). By using (11) and the fact that, by (S), the multiset D(x, o) is independent of x, we hence conclude
Σ_{x∈x} Σ_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = Σ_{x∈x} Σ_{d_p∈D(x,o)} ∏_{dp∈d_p} (1/dp) = d Σ_{d_p∈D(x,o)} ∏_{dp∈d_p} (1/dp) = 1 .
Now, let us relate these considerations on graphs to gradients and use assumptions (H). We recall that the path-product ωp is the product ∏_{p∈p̃} wp.
Lemma 10. Under assumptions (H), the path-products ωp, ωp′ of two distinct paths p and p′ starting from a same input node x, satisfy:
EW [ωp ωp′] = 0 and EW [ω²_p] = ∏_{p∈p̃} EW [w²_p] .
Furthermore, if there is at least one non-average-pooling weight on path p, then EW [ωp] = 0.
Proof. Hypothesis H4 yields
EW [ω²_p] = EW [ ∏_{p∈p̃} w²_p ] = ∏_{p∈p̃} EW [w²_p] .
Now, take two different paths p and p′ that start at a same node x. Starting from x, consider the first node after which p and p′ part and call p and p′ the next nodes on p and p′ respectively. Then the weights wp and wp′ are two weights of a same node. Applying H4 and H5 hence gives
EW [ωp ωp′ ] = EW [ ωp\p ωp′\p′ ] EW [wp wp′ ] = 0 .
Finally, if p has at least one non-average-pooling node p, then successively applying H4 and H3 yields: EW [ωp] = EW [ ωp\p ] EW [wp] = 0.
We now have all elements to prove Theorem 5.
Proof. (of Theorem 5) For a given neuron p in p̃, let p − 1 designate the previous node in p of p. Let σp (resp. σp) be a variable equal to 0 if neuron p gets killed by its ReLU (resp. path p is inactive), and 1 otherwise. Then:
∂xo = Σ_{p∈P(x,o)} ∏_{p∈p̃} ∂_{p−1} p = Σ_{p∈P(x,o)} ωp σp
Consequently:
EW,σ[ (∂xo)² ] = Σ_{p,p′∈P(x,o)} EW[ωp ωp′] Eσ[σp σp′]
= Σ_{p∈P(x,o)} ∏_{p∈p̃} EW[w²_p] Eσ[σ²_p] (13)
= Σ_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1/d ,
where the first line uses the independence between the ReLU killings and the weights (H1), the second uses Lemma 10 and the last uses Lemma 9. The gradient ∂xo thus has coordinates whose squared expectations scale like 1/d. Thus each coordinate scales like 1/√d and ‖∂xo‖q like d^{1/q−1/2}. Conclude on ‖∂xL‖q and ε_p ‖∂xL‖q by using Step 2 of the proof of Theorem 4. Finally, note that, even without the symmetry assumption (S), using Lemma 8 shows that
EW[ ‖∂xo‖²_2 ] = Σ_{x∈x} EW[ (∂xo)² ] = Σ_{x∈x} Σ_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1 .
Thus, with or without (S), ‖∂xo‖2 is independent of the input-dimension d.
B.4 PROOF OF THEOREM 7
To prove Theorem 7, we will actually prove the following more general theorem, which generalizes Theorem 5. Theorem 7 is a straightforward corollary of it. Theorem 11. Consider any feed-forward network with linear connections and ReLU activation functions that outputs logits fk(x) and satisfies assumptions (H). Suppose that there is a fixed multiset of integers {a1, . . . , an} such that each path from input to output traverses exactly n average pooling nodes with degrees {a1, . . . , an}. Then:
‖∂xfk‖2 ∝ 1 / ∏_{i=1}^{n} √a_i . (14)
Furthermore, if the net satisfies the symmetry assumption (S), then: |∂xfk| ∝ 1 / √(d ∏_{i=1}^{n} a_i) .
Two remarks. First, in all this proof, “weight” encompasses both the standard random weights, and the constant (deterministic) weights equal to 1/(in-degree) of the average-poolings. Second, assumption H5 implies that the average-pooling nodes have disjoint input nodes: otherwise, there would be two non-zero deterministic weights w, w′ from a same neuron that would hence satisfy: EW [ww′] ≠ 0.
Proof. As previously, let o designate any fixed output-logit fk(x). For any path p, let a be the set of average-pooling nodes of p and let q be the set of remaining nodes. Each path-product ωp satisfies: ωp = ωq ωa, where ωa is a same fixed constant. For two distinct paths p, p′, Lemma 10 therefore yields: EW [ω²_p] = ω²_a EW [ω²_q] and EW [ωp ωp′] = 0. Combining this with Lemma 9 and under assumption (S), we get similarly to (13):
EW,σ[ (∂xo)² ] = Σ_{p,p′∈P(x,o)} ωa ωa′ EW[ωq ωq′] Eσ[σq σq′]
= Σ_{p∈P(x,o)} ∏_{i=1}^{n} (1/a²_i) ∏_{q∈q̃} EW[w²_q] Eσ[σ²_q]
= ∏_{i=1}^{n} (1/a_i) [same value for all p] · Σ_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q) (1/2) [= Σ_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1/d by Lemma 9] (15)
= (1/d) ∏_{i=1}^{n} (1/a_i) .
Therefore, |∂xo| = |∂xfk| ∝ 1/√(d ∏_{i=1}^{n} a_i). Again, note that, even without assumption (S), using (15) and Lemma 8 shows that
EW[ ‖∂xo‖²_2 ] = Σ_{x∈x} EW,σ[ (∂xo)² ]
= (by (15)) Σ_{x∈x} ∏_{i=1}^{n} (1/a_i) Σ_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q) (1/2)
= ∏_{i=1}^{n} (1/a_i) Σ_{x∈x} Σ_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) [= 1 by Lemma 8]
= ∏_{i=1}^{n} (1/a_i) ,
which proves (14).
C COMPARISON TO THE CROSS-LIPSCHITZ REGULARIZER
In their Theorem 2.1, Hein & Andriushchenko (2017) show that the minimal ε = ‖δ‖p perturbation to fool the classifier must be bigger than:
min_{k≠c} [ fc(x) − fk(x) ] / [ max_{y∈B(x,ε)} ‖∂xfc(y) − ∂xfk(y)‖q ] . (16)
They argue that the training procedure typically already tries to maximize fc(x)− fk(x), thus one only needs to additionally ensure that ‖∂xfc(x)− ∂xfk(x)‖q is small. They then introduce what they call a Cross-Lipschitz Regularization, which corresponds to the case p = 2 and involves the gradient differences between all classes:
R_{xLip} := (1/K²) Σ_{k,h=1}^{K} ‖∂xfh(x) − ∂xfk(x)‖²_2 (17)
In contrast, using (10), (the square of) our proposed regularizer ‖∂xL‖q from (4) can be rewritten, for p = q = 2 as:
R_{‖·‖2}(f) = Σ_{k,h=1}^{K} qk(x) qh(x) ( ∂xfc(x) − ∂xfk(x) ) · ( ∂xfc(x) − ∂xfh(x) ) (18)
Although both (17) and (18) consist in K2 terms, corresponding to the K2 cross-interaction between the K classes, the big difference is that while in (17) all classes play exactly the same role, in (18) the summands all refer to the target class c in at least two different ways. First, all gradient differences are always taken with respect to ∂xfc. Second, each summand is weighted by the probabilities qk(x) and qh(x) of the two involved classes, meaning that only the classes with a non-negligible probability get their gradient regularized. This reflects the idea that only points near the margin need a gradient regularization, which incidentally will make the margin sharper.
D PERCEPTION THRESHOLD
To keep the average pixel-wise variation constant across dimensions d, we saw in (3) that the threshold ε_p of an `p-attack should scale like d^{1/p}. We will now see another justification for this scaling. Contrary to the rest of this work, where we use a fixed ε_p for all images x, here we will let ε_p depend on the `2-norm of x. If, as usual, the dataset is normalized such that the pixels have on average variance 1, both approaches are almost equivalent.
Suppose that, given an `p-attack norm, we want to choose ε_p such that the signal-to-noise ratio (SNR) ‖x‖2 / ‖δ‖2 of a perturbation δ with `p-norm ≤ ε_p is never greater than a given SNR threshold 1/ε. For p = 2 this imposes ε_2 = ε ‖x‖2. More generally, studying the inclusion of `p-balls in `2-balls yields
ε_p = ε ‖x‖2 d^{1/p−1/2} . (19)
Note that this gives again ε_p = ε_∞ d^{1/p}. This explains how to adjust the threshold with varying `p-attack norm.
Now, let us see how to adjust the threshold of a given `p-norm when the dimension d varies. Suppose that x is a natural image and that decreasing its dimension means either decreasing its resolution or cropping it. Because the statistics of natural images are approximately resolution and scale invariant (Huang, 2000), in either case the average squared value of the image pixels remains unchanged, which implies that ‖x‖2 scales like √d. Pasting this back into (19), we again get:
ε_p = ε_∞ d^{1/p} .
In particular, ε_∞ ∝ ε is a dimension-free number, exactly like in (3) of the main part. Now, why did we choose the SNR as our invariant reference quantity and not anything else? One reason is that it corresponds to a physical power ratio between the image and the perturbation, which we think the human eye is sensitive to. Of course, the eye’s sensitivity also depends on the spectral frequency of the signals involved, but we are only interested in orders of magnitude here.
Another point: any image x yields an adversarial perturbation δx, where by constraint ‖x‖2 / ‖δx‖2 ≤ 1/ε. For `2-attacks, this inequality is actually an equality. But what about other `p-attacks: (on average over x,) how far is the signal-to-noise ratio from its imposed upper bound 1/ε? For p ∉ {1, 2, ∞}, the answer unfortunately depends on the pixel-statistics of the images. But when p is 1 or ∞, then the situation is locally the same as for p = 2. Specifically:
Lemma 12. Let x be a given input and ε > 0. Let ε_p be the greatest threshold such that for any δ with ‖δ‖p ≤ ε_p, the SNR ‖x‖2 / ‖δ‖2 is ≤ 1/ε. Then ε_p = ε ‖x‖2 d^{1/p−1/2}. Moreover, for p ∈ {1, 2, ∞}, if δx is the ε_p-sized `p-attack that locally maximizes the loss-increase, i.e. δx = arg max_{‖δ‖p ≤ ε_p} |∂xL · δ|, then:
SNR(x) := ‖x‖2 / ‖δx‖2 = 1/ε and Ex [SNR(x)] = 1/ε .
Proof. The first paragraph follows from the fact that the greatest `p-ball included in an `2-ball of radius ε ‖x‖2 has radius ε ‖x‖2 d^{1/p−1/2}.
The second paragraph is clear for p = 2. For p =∞, it follows from the fact that δx = ∞ sign∂xL which satisfies: ‖δx‖2 = ∞ √ d = ‖x‖2. For p = 1, it is because δx = 1 maxi=1..d |(∂xL)i|,
which satisfies: ‖δx‖2 = 2/ √ d = ‖x‖2.
Intuitively, this means that for p ∈ {1, 2, ∞}, the SNR of ε_p-sized `p-attacks on any input x will be exactly equal to its fixed upper limit 1/ε. And in particular, the mean SNR over samples x is the same (1/ε) in all three cases.
E VULNERABILITY-DIMENSION DEPENDENCE USING DOWNSIZED IMAGENET IMAGES
We also ran a similar experiment as in Section 4.2, but instead of using upsampled CIFAR-10 images, we created a 12-class dataset of approximately 80,000 3 × 256 × 256-sized RGB images by merging similar ImageNet-classes, resizing the smallest image-edge to 256 pixels and center-cropping the result. We then downsized the images to 32, 64, 128 and 256 pixels per edge, and trained, not 1, but 10 CNNs per image-size. We then computed their adversarial vulnerability and average ‖∂xL‖1. This gave us 2 values per trained net, i.e. 2 x 10 values per image-size, which are shown in Figure 6. The lines follow their medians, the errorbars show their 10th and 90th quantiles. The conclusions are identical to Section 4.2: after usual training, the vulnerability and gradient-norms still increase like √d. Note that, as the gradients get much larger at higher dimensions, the first order approximation in (2) becomes less and less valid, which explains the little inflection of the adversarial vulnerability curve. For smaller ε-thresholds, we verified that the inflection disappears.
F FIGURES WITH AN `2 PERTURBATION-THRESHOLD AND DEEP-FOOL ATTACKS
Here we plot the same curves as in the main part, but using an `2-attack threshold of size ε_2 = 0.005 √d instead of the `∞-threshold, and deep-fool attacks (Moosavi-Dezfooli et al., 2016) instead of iterative `∞-ones in Figs. 8 and 9. Note that contrary to `∞-thresholds, `2-thresholds must be rescaled by √d to stay consistent across dimensions (see Eq. 3 and Appendix D). All curves look essentially the same as their counterparts in the main text.
G A VARIANT OF ADVERSARIALLY-AUGMENTED TRAINING
In usual adversarially-augmented training, the adversarial image x+ δ is generated on the fly, but is nevertheless treated as a fixed input of the neural net, which means that the gradient does not get backpropagated through δ. This need not be. As δ is itself a function of x, the gradients could actually also be backpropagated through δ. As it was only a one-line change of our code, we used this opportunity to test this variant of adversarial training (FGSM-variant in Figure 2) and thank Martín Arjovsky for suggesting it. But except for an increased computation time, we found no significant difference compared to usual augmented training. | 1. What is the main contribution of the paper regarding adversarial vulnerability and neural networks?
2. What are the strengths and weaknesses of the paper's theoretical analysis and empirical evidence?
3. Do you have any concerns about the paper's motivation, assumptions, or conclusions?
4. How does the paper relate to other works in the field of adversarial attacks and defenses?
5. What are some potential limitations or future directions for research related to this paper's topic? | Review | Review
This paper argues that adversarial vulnerability of neural networks increases with input dimension. Theoretical and empirical evidence are given which connect the l_p norm of the gradient of the training objective with the existence of small-worst case l_q perturbations. This connection is made by assuming that the learned function is well approximated by a linear function local to the sampled input x. By making assumptions on the initialization scheme for some simple architectures, the authors show that the l_p norm of the gradient for randomly initialized network will be large, and provide empirical evidence that these assumptions hold after training. These assumptions imply bounds on the typical magnitude of the gradient of the loss with respect to a single input coordinate, this then implies that the overall gradient norm will depend on the input dimension.
I found this paper well written. The mathematical assumptions are presented in a clear, easy to understand manner. Also high level intuition is given around their main theorems which help the reader understand the main ideas. However, I have a number of concerns about this work.
The first is, I do not buy the motivation for studying the "phenomenon" of small worst-case l_p perturbations. I realize this statement applies to a large body of literature, but since the publication of [1] we are still lacking concrete motivating scenarios for the l_p action space. I would encourage the authors instead to ask the closely related but more general question of how we can improve model generalization outside the natural distribution of images, such as generalization in the presence of commonly occurring image corruptions [2]. It's possible that the analysis in this work could better our understanding model generalization in the presence of different image corruptions, indeed by making similar linearity assumptions as considered in this work, test error in additive Gaussian noise can be linked with distance to the decision boundary [3,4]. However, this particular question was not explored in this work.
Second, the work is one of many to relate the norm of the gradient with adversarial robustness (for example, this has been proposed as a defense mechanism in [5,6]). I also suspect that the main theorem relating gradient norm to initialization should easily follow for more general settings using the mean field theory developed by [7,8] (this would be particularly useful for removing assumption H1, which assumes the ReLU activation is a random variable independent of the weights). Overall, I don't see how gradient norms explain why statistical classifiers make mistakes, particularly for more realistic attacker action spaces [9]. Even for "small" l_p adversarial examples there seem to be limitations as to how much gradient norms can explain the phenomenon --- for example even max margin classifiers such as SVM's have "adversarial examples". Furthermore, adversarial training has been shown to reach a point where the model is "robust" locally to training points but this robustness does not generalize to the points in the test set [10]. In fact, for the synthetic data distributions considered in [10], it's proven that no learning algorithm can achieve robustness given insufficient training data.
Finally, the main conclusion of this work "adversarial vulnerability of neural networks increases with input dimension" is an overly general statement which needs a much more nuanced view. While experiments shown in [11] support this conclusion for naturally trained networks, it is shown that when adversarial training is applied the model is more robust when the input dimension is higher (see Figure 4 a. and b.). Perhaps the assumptions for Theorem 4 are violated for these adversarially trained models.
1. https://arxiv.org/abs/1807.06732
2. https://arxiv.org/abs/1807.01697
3. https://arxiv.org/abs/1608.08967
4. https://openreview.net/forum?id=S1xoy3CcYX&noteId=BklKxJBF57
5. https://arxiv.org/abs/1704.08847
6. https://arxiv.org/abs/1608.07690
7. https://arxiv.org/abs/1611.01232
8. https://arxiv.org/abs/1806.05393
9. https://arxiv.org/abs/1712.09665
10. https://arxiv.org/abs/1804.11285
11. https://arxiv.org/pdf/1809.02104.pdf |
ICLR | Title
Adversarial Vulnerability of Neural Networks Increases with Input Dimension
Abstract
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the `1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
1 INTRODUCTION
Following the work of Goodfellow et al. (2015), Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs. A number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs. Of course, this view, which we will focus on here, assumes that the network and loss are differentiable. It has the advantage to yield a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss. Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level.
Contributions. More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability. Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture. That leaves them increasingly vulnerable to adversarial noise. We corroborate our theoretical results by extensive experiments. Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis. We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability. This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.
2 FROM ADVERSARIAL EXAMPLES TO LARGE GRADIENTS
Suppose that a given classifier ϕ classifies an image x as being in category ϕ(x). An adversarial image is a small modification of x, barely noticeable to the human eye, that suffices to fool the classifier into predicting a class different from ϕ(x). It is a small perturbation of the inputs, that creates a large variation of outputs. Adversarial examples thus seem inherently related to large gradients of the network. A connection, that we will now clarify. Note that visible adversarial examples sometimes appear in the literature, but we deliberately focus on imperceptible ones.
Adversarial vulnerability and adversarial damage. In practice, an adversarial image is constructed by adding a perturbation δ to the original image x such that ‖δ‖ ≤ ε for some (small) number ε and a given norm ‖·‖ over the input space. We call the perturbed input x+δ an ε-sized ‖·‖-attack and say that the attack was successful when ϕ(x+δ) ≠ ϕ(x). This motivates Definition 1. Given a distribution P over the input-space, we call adversarial vulnerability of a classifier ϕ to an ε-sized ‖·‖-attack the probability that there exists a perturbation δ of x such that
‖δ‖ ≤ ε and ϕ(x) ≠ ϕ(x+δ) . (1)
We call the average increase-after-attack Ex∼P [∆L] of a loss L the (L-) adversarial damage (of the classifier ϕ to an ε-sized ‖·‖-attack).
When L is the 0-1-loss L0/1, adversarial damage is the accuracy-drop after attack. The 0-1-loss damage is always smaller than adversarial vulnerability, because vulnerability counts all class-changes of ϕ(x), whereas some of them may be neutral to adversarial damage (e.g. a change between two wrong classes). The L0/1-adversarial damage thus lower bounds adversarial vulnerability. Both are even equal when the classifier is perfect (before attack), because then every change of label introduces an error. It is hence tempting to evaluate adversarial vulnerability with L0/1-adversarial damage.
From ∆L0/1 to ∆L and to ∂xL. In practice however, we do not train our classifiers with the non-differentiable 0-1-loss but use a smoother loss L, such as the cross-entropy loss. For similar reasons, we will now investigate the adversarial damage Ex [∆L(x, c)] with loss L rather than L0/1. Like for Goodfellow et al. (2015); Lyu et al. (2015); Sinha et al. (2018) and many others, a classifier ϕ will hence be robust if, on average over x, a small adversarial perturbation δ of x creates only a small variation δL of the loss. Now, if ‖δ‖ ≤ ε, then a first order Taylor expansion in ε shows that
δL = max_{δ : ‖δ‖≤ε} |L(x+δ, c) − L(x, c)| ≈ max_{δ : ‖δ‖≤ε} |∂xL · δ| = ε |||∂xL||| , (2)
where ∂xL denotes the gradient of L with respect to x, and where the last equality stems from the definition of the dual norm |||·||| of ‖·‖. Now two remarks. First: the dual norm only kicks in because we let the input noise δ optimally adjust to the coordinates of ∂xL within its ε-constraint. This is the brand mark of adversarial noise: the different coordinates add up, instead of statistically canceling each other out as they would with random noise. For example, if we impose that ‖δ‖2 ≤ ε, then δ will strictly align with ∂xL. If instead ‖δ‖∞ ≤ ε, then δ will align with the sign of the coordinates of ∂xL. Second remark: while the Taylor expansion in (2) becomes exact for infinitesimal perturbations, for finite ones it may actually be dominated by higher-order terms. Our experiments (Figures 1 & 2) however strongly suggest that in practice the first order term dominates the others. Now, remembering that the dual norm of an ℓp-norm is the corresponding ℓq-norm, and summarizing, we have proven Lemma 2. At first order approximation in ε, an ε-sized adversarial attack generated with norm ‖·‖ increases the loss L at point x by ε |||∂xL|||, where |||·||| is the dual norm of ‖·‖. In particular, an ε-sized ℓp-attack increases the loss by ε ‖∂xL‖q where 1 ≤ p ≤ ∞ and 1/p + 1/q = 1.
Consequently, the adversarial damage of a classifier with loss L to ε-sized attacks generated with norm ‖·‖ is ε Ex|||∂xL|||. This is valid only at first order, but it proves that at least this kind of first-order vulnerability is present. We will see that the first-order predictions closely match the experiments, and that this insight helps protecting even against iterative (non-first-order) attack methods (Figure 1).
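For illustration, the following minimal PyTorch sketch (ours, not part of the original paper; the two-layer net and all constants are arbitrary placeholders) compares the first-order prediction ε‖∂xL‖1 of Lemma 2 with the loss increase actually obtained by an ℓ∞-perturbation aligned with the gradient sign.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 3 * 32 * 32
model = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, d, requires_grad=True)
y = torch.tensor([3])
loss = loss_fn(model(x), y)
grad, = torch.autograd.grad(loss, x)

eps = 0.005
# First-order prediction for an l_inf attack of size eps: eps * ||grad||_1 (dual norm q = 1).
predicted = eps * grad.abs().sum()
# Actual increase for delta = eps * sign(grad), the maximizer of the linear term.
actual = loss_fn(model(x + eps * grad.sign()), y) - loss
print(f"first-order prediction: {predicted.item():.4f}   actual increase: {actual.item():.4f}")
```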
Calibrating the threshold to the attack-norm ‖·‖. Lemma 2 shows that adversarial vulnerability depends on three main factors: (i) ‖·‖, the norm chosen for the attack, (ii) ε, the size of the attack, and (iii) Ex|||∂xL|||, the expected dual norm of ∂xL. We could see Point (i) as a measure of our sensibility to image perturbations, (ii) as our sensibility threshold, and (iii) as the classifier’s expected marginal sensibility to a unit perturbation. Ex|||∂xL||| hence intuitively captures the discrepancy between our perception (as modeled by ‖·‖) and the classifier’s perception for an input-perturbation of small size ε. Of course, this viewpoint supposes that we actually found a norm ‖·‖ (or more generally a metric) that faithfully reflects human perception – a project in its own right, far beyond the scope of this paper. However, it is clear that the threshold ε that we choose should depend on the norm ‖·‖ and hence on the input-dimension d. In particular, for a given pixel-wise order of magnitude of the perturbations δ, the ℓp-norm of the perturbation will scale like d^{1/p}. This suggests to write the threshold ε_p used with ℓp-attacks as:
ε_p = ε_∞ d^{1/p} , (3)
where ε_∞ denotes a dimension-independent constant. In Appendix D we show that this scaling also preserves the average signal-to-noise ratio ‖x‖2 / ‖δ‖2, both across norms and dimensions, so that ε_p could correspond to a constant human perception-threshold. With this in mind, the impatient reader may already jump to Section 3, which contains our main contributions: the estimation of Ex‖∂xL‖q for standard feed-forward nets. Meanwhile, the rest of this section shortly discusses two straightforward defenses that we will use later and that further illustrate the role of gradients.
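As a small numeric illustration of (3) (our own example; the value of ε_∞ and the image size are arbitrary), the rescaled thresholds for a 3 × 32 × 32 input are:

```python
eps_inf = 0.005
d = 3 * 32 * 32                     # input dimension of a CIFAR-10 image
for p in (1.0, 2.0, float("inf")):
    print(f"p = {p}: eps_p = eps_inf * d**(1/p) = {eps_inf * d ** (1.0 / p):.3f}")
```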
A new old regularizer. Lemma 2 shows that the loss of the network after an ε/2-sized ‖·‖-attack is
L_{ε,|||·|||}(x, c) := L(x, c) + (ε/2) |||∂xL||| . (4)
It is thus natural to take this loss-after-attack as a new training objective. Here we introduced the factor ε/2 for reasons that will become clear in a moment. Incidentally, for ‖·‖ = ‖·‖2, this new loss reduces to an old regularization-scheme proposed by Drucker & LeCun (1991) called double-backpropagation. At the time, the authors argued that slightly decreasing a function’s or a classifier’s sensitivity to input perturbations should improve generalization. In a sense, this is exactly our motivation when defending against adversarial examples. It is thus not surprising to end up with the same regularization term. Note that our reasoning only shows that training with one specific norm |||·||| in (4) helps to protect against adversarial examples generated from ‖·‖. A priori, we do not know what will happen for attacks generated with other norms; but our experiments suggest that training with one norm also protects against other attacks (see Figure 2 and Section 4.1).
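A minimal PyTorch sketch of the regularized objective (4) (ours, not the authors' code; `model`, `x`, `y` and the hyper-parameters are placeholders) could look as follows; `create_graph=True` is what allows back-propagating through the gradient norm, i.e. double-backpropagation.

```python
import torch
import torch.nn.functional as F

def loss_after_attack(model, x, y, eps=0.005, q=1):
    """Eq. (4): L(x, y) + (eps / 2) * ||dL/dx||_q, i.e. double-backpropagation for q = 2."""
    x = x.clone().requires_grad_(True)           # x: a batch of inputs
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad.flatten(1).norm(p=q, dim=1).mean()
    return loss + 0.5 * eps * penalty
```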
Link to adversarially-augmented training. In (1), ε designates an attack-size threshold, while in (4), it is a regularization-strength. Rather than a notation conflict, this reflects an intrinsic duality between two complementary interpretations of ε, which we now investigate further. Suppose that, instead of using the loss-after-attack, we augment our training set with ε-sized ‖·‖-attacks x + δ, where for each training point x, the perturbation δ is generated on the fly to locally maximize the loss-increase. Then we are effectively training with
L̃_{ε,‖·‖}(x, c) := (1/2) (L(x, c) + L(x+δ, c)) , (5)
where by construction δ satisfies (2). We will refer to this technique as adversarially augmented training. It was first introduced by Goodfellow et al. (2015) with ‖·‖ = ‖·‖∞ under the name of FGSM¹-augmented training. Using the first order Taylor expansion in ε of (2), this ‘old-plus-postattack’ loss of (5) simply reduces to our loss-after-attack, which proves
Proposition 3. Up to first-order approximations in ε, L̃_{ε,‖·‖} = L_{ε,|||·|||} . Said differently, for small enough ε, adversarially-augmented training with ε-sized ‖·‖-attacks amounts to penalizing the dual norm |||·||| of ∂xL with weight ε/2. In particular, double-backpropagation corresponds to training with ℓ2-attacks, while FGSM-augmented training corresponds to an ℓ1-penalty on ∂xL.
This correspondence between training with perturbations and using a regularizer can be compared to Tikhonov regularization: Tikhonov regularization amounts to training with random noise Bishop (1995), while training with adversarial noise amounts to penalizing ∂xL. Section 4.1 verifies the correspondence between adversarial augmentation and gradient regularization empirically, which also strongly suggests the empirical validity of the first-order Taylor expansion in (2).
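For comparison with the gradient penalty above, here is a sketch of adversarially-augmented training (5) with an ℓ∞ (FGSM) attack; again this is our own illustration with placeholder names, not the original code.

```python
import torch
import torch.nn.functional as F

def fgsm_augmented_loss(model, x, y, eps=0.005):
    """Eq. (5): average of the clean loss and the loss on an eps-sized l_inf (FGSM) attack."""
    x_req = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(loss_clean, x_req, retain_graph=True)
    delta = (eps * grad.sign()).detach()         # treated as a fixed input (cf. Appendix G)
    loss_adv = F.cross_entropy(model(x + delta), y)
    return 0.5 * (loss_clean + loss_adv)
```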
3 ESTIMATING ‖∂xL‖q TO EVALUATE ADVERSARIAL VULNERABILITY
In this section, we evaluate the size of ‖∂xL‖q for standard neural network architectures. We start with fully-connected networks, and finish with a much more general theorem that, not only encompasses CNNs (with or without strided convolutions), but also shows that the gradient-norms are essentially independent of the network topology. We start our analysis by showing how changing q affects the size of ‖∂xL‖q. Suppose for a moment that the coordinates of ∂xL have typical magnitude |∂xL|. Then ‖∂xL‖q scales like d^{1/q} |∂xL|. Consequently
ε_p ‖∂xL‖q ∝ ε_p d^{1/q} |∂xL| ∝ ε_∞ d |∂xL| . (6)
¹FGSM = Fast Gradient Sign Method
This equation carries two important messages. First, we see how ‖∂xL‖q depends on d and q. The dependence seems highest for q = 1. But once we account for the varying perceptibility threshold ε_p ∝ d^{1/p}, we see that adversarial vulnerability scales like d · |∂xL|, whatever ℓp-norm we use. Second, (6) shows that to be robust against any type of ℓp-attack at any input-dimension d, the average absolute value of the coefficients of ∂xL must grow slower than 1/d. Now, here is the catch, which brings us to our core insight.
3.1 CORE IDEA: ONE NEURON WITH MANY INPUTS
In order to preserve the activation variance of the neurons from layer to layer, the neural weights are usually initialized with a variance that is inversely proportional to the number of inputs per neuron. Imagine for a moment that the network consisted only of one output neuron o linearly connected to all input pixels. For the purpose of this example, we assimilate o and L. Because we initialize the weights with a variance of 1/d, their average absolute value |∂xo| ≡ |∂xL| grows like 1/ √ d, rather than the required 1/d. By (6), the adversarial vulnerability ‖∂xo‖q ≡ ‖∂xL‖q therefore increases like d/ √ d = √ d.
This toy example shows that the standard initialization scheme, which preserves the variance from layer to layer, causes the average coordinate-size |∂xL| to grow like 1/√d instead of 1/d. When an ℓ∞-attack tweaks its ε-sized input-perturbations to align with the coordinate-signs of ∂xL, all coordinates of ∂xL add up in absolute value, resulting in an output-perturbation that scales like √d and leaves the network increasingly vulnerable with growing input-dimension.
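The scaling in this toy example is easy to check numerically; the following small NumPy simulation (ours) draws the weights of a single linear output neuron with variance 1/d and prints the ℓ1-norm of its input gradient, which indeed grows like √d (up to the constant √(2/π)).

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (256, 1024, 4096, 16384):
    w = rng.normal(scale=1.0 / np.sqrt(d), size=d)   # weights with variance 1/d
    # For o = w . x the input gradient is w itself, so ||do/dx||_1 = ||w||_1.
    print(f"d = {d:6d}   ||grad||_1 = {np.abs(w).sum():7.2f}   sqrt(d) = {np.sqrt(d):6.1f}")
```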
3.2 GENERALIZATION TO DEEP NETWORKS
Our next theorems generalize the previous toy example to a very wide class of feedforward nets with ReLU activation functions. For illustration purposes, we start with fully connected nets and only then proceed to the broader class, which includes any succession of (possibly strided) convolutional layers. In essence, the proofs iterate our insight on one layer over a sequence of layers. They all rely on the following set (H) of hypotheses:
H1 Non-input neurons are followed by a ReLU killing half of its inputs, independently of the weights.
H2 Neurons are partitioned into layers, meaning groups that each path traverses at most once.
H3 All weights have 0 expectation and variance 2/(in-degree) (‘He-initialization’).
H4 The weights from different layers are independent.
H5 Two distinct weights w, w′ from a same node satisfy E[ww′] = 0.
If we follow common practice and initialize our nets as proposed by He et al. (2015), then H3-H5 are satisfied at initialization by design, while H1 is usually a very good approximation (Balduzzi et al., 2017). Note that such i.i.d. weight assumptions have been widely used to analyze neural nets and are at the heart of very influential and successful prior work (e.g., equivalence between neural nets and Gaussian processes as pioneered by Neal 1996). Nevertheless, they do not hold after training. That is why all our statements in this section are to be understood as orders of magnitudes that are very well satisfied at initialization in theory and in practice, and that we will confirm experimentally after training in Section 4. Said differently, while our theorems rely on the statistics of neural nets at initialization, our experiments confirm their conclusions after training.
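For concreteness, hypothesis H3 corresponds to the standard He initialization, e.g. in PyTorch (a generic sketch of ours; biases are simply set to zero, which the analysis ignores):

```python
import torch.nn as nn

def he_init(module):
    """Weights with zero mean and variance 2/(in-degree), as in hypothesis H3."""
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# usage: net.apply(he_init) for any nn.Module
```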
Theorem 4 (Vulnerability of Fully Connected Nets). Consider a succession of fully connected layers with ReLU activations which takes inputs x of dimension d, satisfies assumptions (H), and outputs logits fk(x) that get fed to a final cross-entropy-loss layer L. Then the coordinates of ∂xfk grow like 1/ √ d, and
‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d . (7)
These networks are thus increasingly vulnerable to `p-attacks with growing input-dimension.
Theorem 4 is a special case of the next theorem, which will show that the previous conclusions are essentially independent of the network-topology. We will use the following symmetry assumption on the neural connections. For a given path p, let the path-degree dp be the multiset of encountered in-degrees along path p. For a fully connected network, this is the unordered sequence of layer-sizes
preceding the last path-node, including the input-layer. Now consider the multiset {dp}p∈P(x,o) of all path-degrees when p varies among all paths from input x to output o. The symmetry assumption (relatively to o) is
(S) All input nodes x have the same multiset {dp}p∈P(x,o) of path-degrees from x to o. Intuitively, this means that the statistics of degrees encountered along paths to the output are the same for all input nodes. This symmetry assumption is exactly satisfied by fully connected nets, almost satisfied by CNNs (up to boundary effects, which can be alleviated via periodic or mirror padding) and exactly satisfied by strided layers, if the layer-size is a multiple of the stride.
Theorem 5 (Vulnerability of Feedforward Nets). Consider any feed-forward network with linear connections and ReLU activation functions. Assume the net satisfies assumptions (H) and outputs logits fk(x) that get fed to the cross-entropy-loss L. Then ‖∂xfk‖2 is independent of the input dimension d and ε_2 ‖∂xL‖2 ∝ √d. Moreover, if the net satisfies the symmetry assumption (S), then
|∂xfk| ∝ 1/√d and (7) still holds: ‖∂xL‖q ∝ d^{1/q − 1/2} and ε_p ‖∂xL‖q ∝ √d.
Theorems 4 and 5 are proven in Appendix B. The main proof idea is that in the gradient norm computation, the He-initialization exactly compensates the combinatorics of the number of paths in the network, so that this norm becomes independent of the network topology. In particular, we get
Corollary 6 (Vulnerability of CNNs). In any succession of convolution and dense layers, strided or not, with ReLU activations, that satisfies assumptions (H) and outputs logits that get fed to the cross-entropy-loss L, the gradients of the logit-coordinates scale like 1/√d and (7) is satisfied. The network is hence increasingly vulnerable with growing input-resolution to attacks generated with any ℓp-norm.
Appendix A shows that the network gradients are dampened when replacing strided layers by average poolings, essentially because average-pooling weights do not follow the He-init assumption H3.
4 EMPIRICAL RESULTS
In Section 4.1, we empirically verify the validity of the first-order Taylor approximation made in (2) (Fig.1), for example by checking the correspondence between loss-gradient regularization and adversarially-augmented training (Fig.2). Section 4.2 then empirically verifies that both the average ℓ1-norm of ∂xL and the adversarial vulnerability grow like √d as predicted by Corollary 6. For all experiments, we approximate adversarial vulnerability using various attacks of the Foolbox package (Rauber et al., 2017). We use an ℓ∞ attack-threshold of size ε_∞ = 0.005 (and later 0.002) which, for pixel-values ranging from 0 to 1, is completely imperceptible but suffices to fool the classifiers on a significant proportion of examples. This ε_∞-threshold should not be confused with the regularization-strengths ε appearing in (4) and (5), which will be varied in some experiments.
4.1 FIRST-ORDER APPROXIMATION, GRADIENT PENALTY, ADVERSARIAL AUGMENTATION
We train several CNNs with same architecture to classify CIFAR-10 images (Krizhevsky, 2009). For each net, we use a specific training method with a specific regularization value ε. The training methods used were ℓ1- and ℓ2-penalization of ∂xL (Eq. 4), adversarial augmentation with ℓ∞- and ℓ2-attacks (Eq. 5), projected gradient descent (PGD) with randomized starts (7 steps per attack with step-size = 0.2 ε_∞; see Madry et al. 2018) and the cross-Lipschitz regularizer (Eq. 17 in Appendix C). We then test the adversarial vulnerability of each trained network using the following attack-methods: single-step ℓ∞- (FGSM) and ℓ2-attacks, iterative ℓ∞- (PGD) and ℓ2-attacks, and DeepFool attacks (Moosavi-Dezfooli et al., 2016). All networks have 6 ‘strided convolution → batchnorm → ReLU’ layers with strides [1, 2, 2, 2, 2, 2] respectively and 64 output-channels each, followed by a final fully-connected linear layer. Results are summarized in Figures 1 and 2. Figure 1 fixes the training method – gradient ℓ1-regularization – and plots the obtained adversarial vulnerabilities for various attack types. Figure 2 fixes the attack type – iterative ℓ∞-attacks – but plots the curves obtained for various training methods. Note that our goal here is not to advocate one defense over another, but rather to check the validity of the Taylor expansion, and empirically verify that first order terms (i.e., gradients) suffice to explain much of the observed adversarial vulnerability. Similarly, our goal in testing several attacks (Figure 1) is not to present a specifically strong one, but rather to verify that for all attacks, the trends are the same: the vulnerability grows with increasing gradients.
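For reference, a minimal PGD attack matching the setup described above (random start, 7 steps of size 0.2 ε_∞, projection onto the ℓ∞-ball) could be sketched as follows; this is our own illustration and omits, e.g., clipping to the valid pixel range.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.005, steps=7, step_size=None):
    """Iterative l_inf attack with a random start inside the eps-ball."""
    step_size = step_size or 0.2 * eps
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach()
    return x + delta
```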
Validity of first order expansion. The following observations support the validity of the first order Taylor expansion in (2) and suggest that it is a crucial component of adversarial vulnerability: (i) the efficiency of the first-order defense against iterative (non-first-order) attacks (Fig.1a); (ii) the striking similarity between the PGD curves (adversarial augmentation with iterative attacks) and the other adversarial-training curves (one-step attacks/defenses); (iii) the functional-like dependence between any approximation of adversarial vulnerability and Ex‖∂xL‖1 (Fig.1b), and its independence of the training method (Fig.2d); (iv) the excellent correspondence between the gradient-regularization and adversarial training curves (see next paragraph). Said differently, adversarial examples seem indeed to be primarily caused by large gradients of the classifier as captured via the induced loss.²
²On Figure 1, the two ℓ∞-attacks seem more efficient than the others, because we chose an ℓ∞ perturbation threshold (ε_∞). With an ℓ2-threshold it is the opposite (see Figure 7, Appendix F).
Illustration of Proposition 3. The upper row of Figure 2 plots Ex‖∂xL‖1, adversarial vulnerability and accuracy as a function of ε d^{1/p}. The excellent match between the adversarial augmentation curve with p = ∞ (p = 2) and its gradient-regularization dual counterpart with q = 1 (resp. q = 2) illustrates the duality between ε as a threshold for adversarially-augmented training and ε as a regularization constant in the regularized loss (Proposition 3). It also supports the validity of the first-order Taylor expansion in (2).
Confirmation of (3). Still on the upper row, the curves for p = ∞, q = 1 have no reason to match those for p = q = 2 when plotted against ε, because an ε-threshold is relative to a specific attack-norm. However, (3) suggested that the rescaled thresholds ε d^{1/p} may approximately correspond to a same ‘threshold-unit’ across ℓp-norms and across dimension. This is well confirmed by the upper row plots: by rescaling the x-axis, the p = q = 2 and q = 1, p = ∞ curves get almost super-imposed. Accuracy-vs-Vulnerability Trade-Off. Merging Figures 2b and 2c by taking out ε, Figure 2f shows that all gradient regularization and adversarial training methods yield equivalent accuracy-vulnerability trade-offs. Incidentally, for higher penalization values, these trade-offs appear to be much better than those given by cross Lipschitz regularization.
The penalty-norm does not matter. We were surprised to see that on Figures 2d and 2f, the L_{ε,q} curves are almost identical for q = 1 and 2. This indicates that both norms can be used interchangeably in (4) (modulo proper rescaling of ε via (3)), and suggests that protecting against a specific attack-norm also protects against others. (6) may provide an explanation: if the coordinates of ∂xL behave like centered, uncorrelated variables with equal variance – which follows from assumptions (H) –, then the ℓ1- and ℓ2-norms of ∂xL are simply proportional. Plotting Ex‖∂xL(x)‖2 against Ex‖∂xL(x)‖1 in Figure 2e confirms this explanation. The slope is independent of the training method. Therefore, penalizing ‖∂xL(x)‖1 during training will not only decrease Ex‖∂xL‖1 (as shown in Figure 2a), but also drive down Ex‖∂xL‖2 and vice-versa.
4.2 VULNERABILITY GROWS WITH INPUT RESOLUTION
Theorems 4-5 and Corollary 6 predict a linear growth of the average `1-norm of ∂xL with the square root of the input dimension d, and therefore also of adversarial vulnerability (Lemma 2). To test these predictions, we upsampled the CIFAR-10 images (of size 3 x 32 x 32) by copying pixels so as to get 4 datasets with, respectively, 32, 64, 128 and 256 pixels per edge. We then trained a CNN on each dataset
and computed their adversarial vulnerability (with iterative ℓ∞-attacks, threshold ε_∞ = 0.002) and average ‖∂xL‖1 over the last 20 epochs on the same held-out test-dataset. This gave us 2 x 20 values per net and image-size, summarized in Figure 3. The dashed-lines follow their medians and the errorbars show their 10th and 90th quantiles. As predicted by our theorems, both ‖∂xL‖1 and adversarial vulnerability grow approximately linearly with √d. We also ran a similar experiment on downsized ImageNet images, where we train several identical nets per image-size rather than just one. Conclusions are unchanged. See Appendix E.
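The two quantities reported in Figure 3 can be estimated with a few lines of PyTorch; the sketch below is ours (`model` and `loader` are placeholders) and shows the pixel-copy upsampling and the average ℓ1 gradient-norm.

```python
import torch
import torch.nn.functional as F

def upsample_by_copy(x, factor):
    """Upsample a batch of images by copying pixels (nearest neighbour)."""
    return F.interpolate(x, scale_factor=factor, mode="nearest")

def mean_grad_l1(model, loader):
    """Average ||dL/dx||_1 over a data loader."""
    total, n = 0.0, 0
    for x, y in loader:
        x = x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        total += grad.abs().flatten(1).sum(dim=1).sum().item()
        n += x.shape[0]
    return total / n
```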
All networks had exactly the same amount of parameters and very similar structure across the various input-resolutions. The CNNs were a succession of 8 ‘convolution→ batchnorm→ ReLU’ layers with 64 output channels, followed by a final full-connection to the 12 logit-outputs. We used 2× 2- max-poolings after the convolutions of layers 2,4, 6 and 8, and a final max-pooling after layer 8 that fed only 1 neuron per channel to the fully-connected layer. To ensure that the convolution-kernels cover similar ranges of the images across each of the 32, 64, 128 and 256 input-resolutions, we respectively dilated all convolutions (‘à trous’) by a factor 1, 2, 4 and 8.
5 DISCUSSIONS
5.1 IMPLICATIONS: WHY PRIOR VULNERABILITY MAY MATTER
Our theoretical results show that the priors of classical neural networks yield vulnerable functions because of naturally high gradients. And our experiments (Fig 3&6) suggest that usual training does not escape these prior properties. But how may these insights help understanding the vulnerability of robustly trained networks? Clearly, to be successful, robust training algorithms must escape ill-behaved priors, which explains why most methods (e.g. FGSM, PGD) are essentially gradient penalization techniques. But, MNIST aside, even state-of-the-art methods largely fail at protecting current network architectures (Madry et al., 2018), and understanding why motivates this and many other papers. Interestingly, Schmidt et al. (2018) recently noticed that those methods actually do protect the nets on training examples, but fail to generalize to the test set. They hence conclude that state-of-the-art robustification algorithms work, but need more data. Alternatively however, when generalization fails, one can also reduce the model’s complexity. Large fully connected nets for example typically fail to generalize to out-of-sample examples: getting accuracies similar to CNNs would need prohibitively many training points. Similarly, Schmidt et al.’s observations may suggest that, outside the training points, networks tend to recover their prior properties, i.e. naturally large gradients. Figure 4 corroborates this hypothesis. It plots the evolution over training epochs of the ℓ1-gradient-norms of the CNNs from Section 4.2 (Fig 3) on the training and test sets respectively. The discrepancy is unmistakable: after a brief initialization phase, the norms decrease on the training set, but increase on the test set. They are moreover almost input-dimension independent on the training set, but scale as √d on the test set (as seen in Fig 3) up to respectively 2, 4, 8 and 16 times the training set values. These observations suggest that, with the current amount of data, tackling adversarial vulnerability may require new architectures with inherently smaller gradients. Searching these architectures among those with well-behaved prior-gradients seems a reasonable start, where our theoretical results may prove very useful.³
5.2 RELATED LITERATURE
On network vulnerability. Goodfellow et al. (2015) already stressed that adversarial vulnerability increases with growing dimension d. But their argument only relied on a linear ‘one-output-to-manyinputs’-model with dimension-independent weights. They therefore concluded on a linear growth of adversarial vulnerability with d. In contrast, our theory applies to almost any standard feed-forward architecture (not just linear), and shows that, once we adjust for the weight’s dimension-dependence, adversarial vulnerability increases like √ d (not d), almost independently of the architecture. Nevertheless, our experiments confirm Goodfellow et al.’s idea that our networks are “too linear-like”, in the sense that a first-order Taylor expansion is indeed sufficient to explain the adversarial vulnerability of neural networks. As suggested by the one-output-to-many-inputs model, the culprit is that growing
³Appendix A investigates such a preliminary direction by introducing average poolings, which have a weight-size 1/(in-channels) rather than the typical 1/√(in-channels) of the other He-initialized weights.
dimensionality gives the adversary more and more room to ‘wriggle around’ with the noise and adjust to the gradient of the output neuron. This wriggling, we show, is still possible when the output is connected to all inputs only indirectly, even when no neuron is directly connected to all inputs, like in CNNs. This explanation of adversarial vulnerability is independent of the intrinsic dimensionality or geometry of the data (compare to Amsaleg et al. 2017; Gilmer et al. 2018). Finally, let us mention that Fawzi et al. (2016) show a close link between the vulnerability to small worst-case perturbation (as studied here) and larger average perturbations. Our findings on the adversarial vulnerability of NNs to small perturbations could thus be translated accordingly.
On robustification algorithms. Incidentally, Goodfellow et al. (2015) also already relate adversarial vulnerability to large gradients of the loss L, an insight at the very heart of their FGSM-algorithm. They however do not propose any explicit penalizer on the gradient of L other than indirectly through adversarially-augmented training. Conversely, Ross & Doshi-Velez (2018) propose the old double-backpropagation to robustify networks but make no connection to FGSM and adversarial augmentation. Lyu et al. (2015) discuss and use the connection between gradient-penalties and adversarial augmentation, but never actually compare both in experiments. This comparison however is essential to test the validity of the first-order Taylor expansion in (2), as confirmed by the similarity between the gradient-regularization and adversarial-augmentation curves in Figure 2. Hein & Andriushchenko (2017) derived yet another gradient-based penalty –the cross-Lipschitz-penalty– by considering (and proving) formal guarantees on adversarial vulnerability itself, rather than adversarial damage. While both penalties are similar in spirit, focusing on the adversarial damage rather than vulnerability has two main advantages. First, it achieves better accuracy-to-vulnerability ratios, both in theory and practice, because it ignores class-switches between misclassified examples and penalizes only those that reduce the accuracy. Second, it allows to deal with one number only, ∆L, whereas Hein & Andriushchenko’s cross-Lipschitz regularizer and theoretical guarantees explicitly involve all K logit-functions (and their gradients). See Appendix C. Penalizing network-gradients is also at the heart of contractive auto-encoders as proposed by Rifai et al. (2011), where it is used to regularize the encoder-features. Seeing adversarial training as a generalization method, let us also mention Hochreiter & Schmidhuber (1995), who propose to enhance generalization by searching for parameters in a “flat minimum region” of the loss. This leads to a penalty involving the gradient of the loss, but taken with respect to the weights, rather than the inputs. In the same vein, a gradientregularization of the loss of generative models also appears in Proposition 6 of Ollivier (2014), where it stems from a code-length bound on the data (minimum description length). More generally, the gradient regularized objective (4) is essentially the first-order approximation of the robust training objective max‖δ‖≤ L(x+ δ, c) which has a long history in math (Wald, 1945), machine learning (Xu et al., 2009) and now adversarial vulnerability (Sinha et al., 2018). Finally, Cisse et al. (2017) propose new network-architectures that have small gradients by design, rather than by special training: an approach that makes all the more sense, considering the conclusion of Theorems 4 and 5. For further details and references on adversarial attacks and defenses, we refer to Yuan et al. (2017).
6 CONCLUSION
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂xL of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (Figures 1&2d). We then evaluated the size of ‖∂xL‖q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to `p-attacks with growing input dimension d (the image-size), almost independently of their architecture. Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training, but not on the test set. Schmidt et al. (2018) suggest that alleviating this generalization gap requires more data. But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors. Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.
A EFFECTS OF STRIDED AND AVERAGE-POOLING LAYERS ON ADVERSARIAL VULNERABILITY
It is common practice in CNNs to use average-pooling layers or strided convolutions to progressively decrease the number of pixels per channel. Corollary 6 shows that using strided convolutions does not protect against adversarial examples. However, what if we replace strided convolutions by convolutions with stride 1 plus an average-pooling layer? Theorem 5 considers only randomly initialized weights with typical size 1/ √ in-degree. Average-poolings however introduce deterministic weights of size 1/(in-degree). These are smaller and may therefore dampen the input-to-output gradients and protect against adversarial examples. We confirm this in our next theorem, which uses a slightly modified version (H′) of (H) to allow average pooling layers. (H′) is (H), but where the He-init H3 applies to all weights except the (deterministic) average pooling weights, and where H1 places a ReLU on every non-input and non-average-pooling neuron.
Theorem 7 (Effect of Average-Poolings). Consider a succession of convolution layers, dense layers and n average-pooling layers, in any order, that satisfies (H′) and outputs logits fk(x). Assume the n average pooling layers have a stride equal to their mask size and perform averages over a1, ..., an nodes respectively. Then ‖∂xfk‖2 and |∂xfk| scale like 1/√(a1 · · · an) and 1/√(d a1 · · · an) respectively.
Proof in Appendix B.4. Theorem 7 suggests to try and replace any strided convolution by its non-strided counterpart, followed by an average-pooling layer. It also shows that if we systematically reduce the number of pixels per channel down to 1 by using only non-strided convolutions and average-pooling layers (i.e. d = ∏_{i=1}^n a_i), then all input-to-output gradients should become independent of d, thereby making the network completely robust to adversarial examples.
Our following experiments (Figure 5) show that after training, the networks get indeed robustified to adversarial examples, but remain more vulnerable than suggested by Theorem 7.
Experimental setup. Theorem 7 shows that, contrary to strided layers, average-poolings should decrease adversarial vulnerability. We tested this hypothesis on CNNs trained on CIFAR-10, with 6 blocks of ‘convolution → BatchNorm→ReLU’ with 64 output-channels, followed by a final average pooling feeding one neuron per channel to the last fully-connected linear layer. Additionally, after every second convolution, we placed a pooling layer with stride and mask-size (2, 2) (thus acting on 2× 2
neurons at a time, without overlap). We tested average-pooling, strided and max-pooling layers and trained 20 networks per architecture. Results are shown in Figure 5. All accuracies are very close, but, as predicted, the networks with average pooling layers are more robust to adversarial images than the others. However, they remain more vulnerable than what would follow from Theorem 7. We also noticed that, contrary to the strided architectures, their gradients after training are an order of magnitude higher than at initialization and than predicted. This suggests that assumptions (H) get more violated when using average-poolings instead of strided layers. Understanding why will need further investigations.
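The three architectures compared in Figure 5 differ only in how they downsample; a sketch of one such block in PyTorch (ours, with placeholder channel sizes) makes the comparison explicit.

```python
import torch.nn as nn

def block(in_ch, out_ch, downsample="avg"):
    """One 'convolution -> BatchNorm -> ReLU' block with a choice of downsampling."""
    conv = nn.Conv2d(in_ch, out_ch, 3, padding=1,
                     stride=2 if downsample == "strided" else 1)
    layers = [conv, nn.BatchNorm2d(out_ch), nn.ReLU()]
    if downsample == "avg":
        layers.append(nn.AvgPool2d(2))   # deterministic 1/4 weights (1/in-degree), cf. Theorem 7
    elif downsample == "max":
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)
```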
B PROOFS
B.1 PROOF OF PROPOSITION 3
Proof. Let δ be an adversarial perturbation with ‖δ‖ = 1 that locally maximizes the loss increase at point x, meaning that δ = arg max_{‖δ′‖≤1} ∂xL · δ′. Then, by definition of the dual norm of ∂xL we have: ∂xL · (εδ) = ε |||∂xL|||. Thus
L̃_{ε,‖·‖}(x, c) = (1/2) (L(x, c) + L(x+εδ, c)) = (1/2) (2L(x, c) + ε |∂xL · δ| + o(ε)) = L(x, c) + (ε/2) |||∂xL||| + o(ε) = L_{ε,|||·|||}(x, c) + o(ε) .
B.2 PROOF OF THEOREM 4
Proof. Let x designate a generic coordinate of x. To evaluate the size of ‖∂xL‖q, we will evaluate the size of the coordinates ∂xL of ∂xL by decomposing them into
∂xL = ∑_{k=1}^K (∂L/∂fk) (∂fk/∂x) =: ∑_{k=1}^K ∂kL ∂xfk ,
where fk(x) denotes the logit-probability of x belonging to class k. We now investigate the statistical properties of the logit gradients ∂xfk, and then see how they shape ∂xL.
Step 1: Statistical properties of ∂xfk. Let P(x, k) be the set of paths p from input neuron x to output-logit k. Let p− 1 and p be two successive neurons on path p, and p̃ be the same path p but without its input neuron. Let wp designate the weight from p− 1 to p and ωp be the path-product ωp := ∏ p∈p̃ wp. Finally, let σp (resp. σp) be equal to 1 if the ReLU of node p (resp. if path p) is active for input x, and 0 otherwise.
As previously noticed by Balduzzi et al. (2017) using the chain rule, we see that ∂xfk is the sum of all ωp whose path is active, i.e. ∂xfk(x) = ∑ p∈P(x,k) ωpσp. Consequently:
E_{W,σ}[∂xfk(x)²] = ∑_{p∈P(x,k)} ∏_{p∈p̃} E_W[w_p²] E_σ[σ_p²] = |P(x, k)| ∏_{p∈p̃} (2/d_{p−1}) · (1/2) = ∏_{p∈p̃} d_p · ∏_{p∈p̃} 1/d_{p−1} = 1/d . (8)
The first equality uses H1 to decouple the expectations over weights and ReLUs, and then applies Lemma 10 of Appendix B.3, which uses H3-H5 to kill all cross-terms and take the expectation over weights inside the product. The second equality uses H3 and the fact that the resulting product is the same for all active paths. The third equality counts the number of paths from x to k and we conclude by noting that all terms cancel out, except dp−1 from the input layer which is d. Equation 8 shows that |∂xfk| ∝ 1/ √ d.
Step 2: Statistical properties of ∂kL and ∂xL. Defining qk(x) := e^{fk(x)} / ∑_{h=1}^K e^{fh(x)} (the probability of image x belonging to class k according to the network), we have, by definition of the cross-entropy loss, L(x, c) := − log qc(x), where c is the label of the target class. Thus:
∂kL(x) = −qk(x) if k ≠ c, and ∂kL(x) = 1 − qc(x) otherwise, and
∂xL(x) = (1 − qc) ∂xfc(x) + ∑_{k≠c} qk (−∂xfk(x)) . (9)
Using again Lemma 10, we see that the ∂xfk(x) are K centered and uncorrelated variables. So ∂xL(x) is approximately the sum of K uncorrelated variables with zero-mean, and its total variance is given by ((1 − qc)² + ∑_{k≠c} qk²)/d. Hence the magnitude of ∂xL(x) is 1/√d for all x, so the ℓq-norm of the full input gradient is d^{1/q−1/2}. (6) concludes.
Remark 1. Equation 9 can be rewritten as
∂xL(x) = ∑_{k=1}^K qk(x) (∂xfc(x) − ∂xfk(x)) . (10)
As the term k = c disappears, the norm of the gradients ∂xL(x) appears to be controlled by the total error probability. This suggests that, even without regularization, trying to decrease the ordinary
classification error is still a valid strategy against adversarial examples. It reflects the fact that when increasing the classification margin, larger gradients of the classifier’s logits are needed to push images from one side of the classification boundary to the other. This is confirmed by Theorem 2.1 of Hein & Andriushchenko (2017). See also (16) in Appendix C.
B.3 PROOF OF THEOREM 5
The proof of Theorem 5 is very similar to the one of Theorem 4, but we will need to first generalize the equalities appearing in (8). To do so, we identify the computational graph of a neural network to an abstract Directed Acyclic Graph (DAG) which we use to prove the needed algebraic equalities. We then concentrate on the statistical weight-interactions implied by assumption (H), and finally throw these results together to prove the theorem. In all the proof, o will designate one of the output-logits fk(x).
Lemma 8. Let x be the vector of inputs to a given DAG, o be any leaf-node of the DAG, x a generic coordinate of x. Let p be a path from the set of paths P(x, o) from x to o, p̃ the same path without node x, p a generic node in p̃, and dp be its input-degree. Then:
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1 (11)
Proof. We will reason on a random walk starting at o and going up the DAG by choosing any incoming node with equal probability. The DAG being finite, this walk will end up at an input-node x with probability 1. Each path p is taken with probability ∏_{p∈p̃} 1/d_p. And the probability to end up at an input-node is the sum of all these probabilities, i.e. ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p, which concludes.
The sum over all inputs x in (11) being 1, on average it is 1/d for each x, where d is the total number of inputs (i.e. the length of x). It becomes an equality under assumption (S): Lemma 9. Under the symmetry assumption (S), and with the previous notations, for any input x ∈ x:
∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1/d . (12)
Proof. Let us denote D(x, o) := {d_p}_{p∈P(x,o)}. Each path p in P(x, o) corresponds to exactly one element d_p in D(x, o) and vice-versa. And the elements dp of d_p completely determine the product ∏_{p∈p̃} 1/d_p. By using (11) and the fact that, by (S), the multiset D(x, o) is independent of x, we hence conclude
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = ∑_{x∈x} ∑_{d_p∈D(x,o)} ∏_{dp∈d_p} 1/dp = d ∑_{d_p∈D(x,o)} ∏_{dp∈d_p} 1/dp = 1 .
Now, let us relate these considerations on graphs to gradients and use assumptions (H). We remind that the path-product ωp is the product ∏_{p∈p̃} w_p.
Lemma 10. Under assumptions (H), the path-products ωp, ωp′ of two distinct paths p and p′ starting from a same input node x, satisfy:
E_W[ω_p ω_{p′}] = 0 and E_W[ω_p²] = ∏_{p∈p̃} E_W[w_p²] .
Furthermore, if there is at least one non-average-pooling weight on path p, then EW [ωp] = 0.
Proof. Hypothesis H4 yields
E_W[ω_p²] = E_W[∏_{p∈p̃} w_p²] = ∏_{p∈p̃} E_W[w_p²] .
Now, take two different paths p and p′ that start at a same node x. Starting from x, consider the first node after which p and p′ part and call p and p′ the next nodes on p and p′ respectively. Then the weights wp and wp′ are two weights of a same node. Applying H4 and H5 hence gives
E_W[ω_p ω_{p′}] = E_W[ω_{p\p} ω_{p′\p′}] E_W[w_p w_{p′}] = 0 .
Finally, if p has at least one non-average-pooling node p, then successively applying H4 and H3 yields: E_W[ω_p] = E_W[ω_{p\p}] E_W[w_p] = 0.
We now have all elements to prove Theorem 5.
Proof. (of Theorem 5) For a given neuron p in p̃, let p − 1 designate the previous node in p of p. Let σp (resp. σp) be a variable equal to 0 if neuron p gets killed by its ReLU (resp. path p is inactive), and 1 otherwise. Then:
∂xo = ∑_{p∈P(x,o)} ∏_{p∈p̃} ∂_{p−1}p = ∑_{p∈P(x,o)} ω_p σ_p
Consequently:
E_{W,σ}[(∂xo)²] = ∑_{p,p′∈P(x,o)} E_W[ω_p ω_{p′}] E_σ[σ_p σ_{p′}] = ∑_{p∈P(x,o)} ∏_{p∈p̃} E_W[w_p²] E_σ[σ_p²] = ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) · (1/2) = 1/d , (13)
where the first line uses the independence between the ReLU killings and the weights (H1), the second uses Lemma 10 and the last uses Lemma 9. The gradient ∂xo thus has coordinates whose squared expectations scale like 1/d. Thus each coordinate scales like 1/√d and ‖∂xo‖q like d^{1/q−1/2}. Conclude on ‖∂xL‖q and ε_p ‖∂xL‖q by using Step 2 of the proof of Theorem 4. Finally, note that, even without the symmetry assumption (S), using Lemma 8 shows that
E_W[‖∂xo‖2²] = ∑_{x∈x} E_W[(∂xo)²] = ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) · (1/2) = 1 .
Thus, with or without (S), ‖∂xo‖2 is independent of the input-dimension d.
B.4 PROOF OF THEOREM 7
To prove Theorem 7, we will actually prove the following more general theorem, which generalizes Theorem 5. Theorem 7 is a straightforward corollary of it. Theorem 11. Consider any feed-forward network with linear connections and ReLU activation functions that outputs logits fk(x) and satisfies assumptions (H). Suppose that there is a fixed multiset of integers {a1, . . . , an} such that each path from input to output traverses exactly n average pooling nodes with degrees {a1, . . . , an}. Then:
‖∂xfk‖2 ∝ 1 / ∏_{i=1}^n √a_i . (14)
Furthermore, if the net satisfies the symmetry assumption (S), then: |∂xfk| ∝ 1/√(d ∏_{i=1}^n a_i) .
Two remarks. First, in all this proof, “weight” encompasses both the standard random weights, and the constant (deterministic) weights equal to 1/(in-degree) of the average-poolings. Second, assumption H5 implies that the average-pooling nodes have disjoint input nodes: otherwise, there would be two non-zero deterministic weights w,w′ from a same neuron that would hence satisfy: EW [ww′] 6= 0.
Proof. As previously, let o designate any fixed output-logit fk(x). For any path p, let a be the set of average-pooling nodes of p and let q be the set of remaining nodes. Each path-product ωp satisfies: ωp = ωq ωa, where ωa is a same fixed constant. For two distinct paths p, p′, Lemma 10 therefore yields: E_W[ω_p²] = ω_a² E_W[ω_q²] and E_W[ω_p ω_{p′}] = 0. Combining this with Lemma 9 and under assumption (S), we get similarly to (13):
E_{W,σ}[(∂xo)²] = ∑_{p,p′∈P(x,o)} ω_a ω_{a′} E_W[ω_q ω_{q′}] E_σ[σ_q σ_{q′}]
= ∑_{p∈P(x,o)} ∏_{i=1}^n (1/a_i²) ∏_{q∈q̃} E_W[w_q²] E_σ[σ_q²]
= ∏_{i=1}^n (1/a_i) · ∑_{p∈P(x,o)} ∏_{i=1}^n (1/a_i) ∏_{q∈q̃} (2/d_q) · (1/2)
= (1/d) ∏_{i=1}^n (1/a_i) , (15)
where the factor ∏_{i=1}^n (1/a_i) in front of the sum takes the same value for all p, and the remaining sum equals ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = 1/d by Lemma 9.
Therefore, |∂xo| = |∂xfk| ∝ 1/√(d ∏_{i=1}^n a_i). Again, note that, even without assumption (S), using (15) and Lemma 8 shows that
E_W[‖∂xo‖2²] = ∑_{x∈x} E_{W,σ}[(∂xo)²] = ∑_{x∈x} ∏_{i=1}^n (1/a_i) ∑_{p∈P(x,o)} ∏_{i=1}^n (1/a_i) ∏_{q∈q̃} (2/d_q) · (1/2) = ∏_{i=1}^n (1/a_i) ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} 1/d_p = ∏_{i=1}^n (1/a_i) ,
where the first equality uses (15) and the last sum equals 1 by Lemma 8,
which proves (14).
C COMPARISON TO THE CROSS-LIPSCHITZ REGULARIZER
In their Theorem 2.1, Hein & Andriushchenko (2017) show that the minimal ε = ‖δ‖p perturbation to fool the classifier must be bigger than:
min_{k≠c} ( fc(x) − fk(x) ) / max_{y∈B(x,ε)} ‖∂xfc(y) − ∂xfk(y)‖q . (16)
They argue that the training procedure typically already tries to maximize fc(x)− fk(x), thus one only needs to additionally ensure that ‖∂xfc(x)− ∂xfk(x)‖q is small. They then introduce what they call a Cross-Lipschitz Regularization, which corresponds to the case p = 2 and involves the gradient differences between all classes:
R_{xLip} := (1/K²) ∑_{k,h=1}^K ‖∂xfh(x) − ∂xfk(x)‖2² (17)
In contrast, using (10), (the square of) our proposed regularizer ‖∂xL‖q from (4) can be rewritten, for p = q = 2 as:
R_{‖·‖2}(f) = ∑_{k,h=1}^K qk(x) qh(x) (∂xfc(x) − ∂xfk(x)) · (∂xfc(x) − ∂xfh(x)) (18)
Although both (17) and (18) consist in K2 terms, corresponding to the K2 cross-interaction between the K classes, the big difference is that while in (17) all classes play exactly the same role, in (18) the summands all refer to the target class c in at least two different ways. First, all gradient differences are always taken with respect to ∂xfc. Second, each summand is weighted by the probabilities qk(x) and qh(x) of the two involved classes, meaning that only the classes with a non-negligible probability get their gradient regularized. This reflects the idea that only points near the margin need a gradient regularization, which incidentally will make the margin sharper.
D PERCEPTION THRESHOLD
To keep the average pixel-wise variation constant across dimensions d, we saw in (3) that the threshold ε_p of an ℓp-attack should scale like d^{1/p}. We will now see another justification for this scaling. Contrary to the rest of this work, where we use a fixed ε_p for all images x, here we will let ε_p depend on the ℓ2-norm of x. If, as usual, the dataset is normalized such that the pixels have on average variance 1, both approaches are almost equivalent.
Suppose that, given an ℓp-attack norm, we want to choose ε_p such that the ℓ2-norm of the perturbation never exceeds a fixed fraction ε of the ℓ2-norm of the image, i.e. such that the signal-to-noise ratio (SNR) ‖x‖2 / ‖δ‖2 of a perturbation δ with ℓp-norm ≤ ε_p never drops below the SNR threshold 1/ε. For p = 2 this imposes ε_2 = ε ‖x‖2. More generally, studying the inclusion of ℓp-balls in ℓ2-balls yields
ε_p = ε ‖x‖2 d^{1/p−1/2} . (19)
Note that this gives again ε_p = ε_∞ d^{1/p}. This explains how to adjust the threshold with varying ℓp-attack norm.
Now, let us see how to adjust the threshold ε_p of a given ℓp-norm when the dimension d varies. Suppose that x is a natural image and that decreasing its dimension means either decreasing its resolution or cropping it. Because the statistics of natural images are approximately resolution and scale invariant (Huang, 2000), in either case the average squared value of the image pixels remains unchanged, which implies that ‖x‖2 scales like √d. Pasting this back into (19), we again get:
ε_p = ε_∞ d^{1/p} .
In particular, ε_∞ ∝ ε is a dimension-free number, exactly like in (3) of the main part. Now, why did we choose the SNR as our invariant reference quantity and not anything else? One reason is that it corresponds to a physical power ratio between the image and the perturbation, which we think the human eye is sensible to. Of course, the eye’s sensitivity also depends on the spectral frequency of the signals involved, but we are only interested in orders of magnitude here.
Another point: any image x yields an adversarial perturbation δx, where by constraint ‖x‖2 / ‖δx‖2 ≥ 1/ε. For ℓ2-attacks, this inequality is actually an equality. But what about other ℓp-attacks: (on average over x,) how far is the signal-to-noise ratio from its imposed bound 1/ε? For p ∉ {1, 2, ∞}, the answer unfortunately depends on the pixel-statistics of the images. But when p is 1 or ∞, then the situation is locally the same as for p = 2. Specifically: Lemma 12. Let x be a given input and ε > 0. Let ε_p be the greatest threshold such that for any δ with ‖δ‖p ≤ ε_p, the SNR ‖x‖2 / ‖δ‖2 is ≥ 1/ε. Then ε_p = ε ‖x‖2 d^{1/p−1/2}. Moreover, for p ∈ {1, 2, ∞}, if δx is the ε_p-sized ℓp-attack that locally maximizes the loss-increase, i.e. δx = arg max_{‖δ‖p≤ε_p} |∂xL · δ|, then:
SNR(x) := ‖x‖2 / ‖δx‖2 = 1/ε and Ex[SNR(x)] = 1/ε .
Proof. The first paragraph follows from the fact that the greatest ℓp-ball included in an ℓ2-ball of radius ε ‖x‖2 has radius ε ‖x‖2 d^{1/p−1/2}.
The second paragraph is clear for p = 2. For p = ∞, it follows from the fact that δx = ε_∞ sign(∂xL), which satisfies: ‖δx‖2 = ε_∞ √d = ε ‖x‖2. For p = 1, it is because δx = ε_1 max_{i=1..d} |(∂xL)_i|, which satisfies: ‖δx‖2 = ε_1/√d = ε ‖x‖2.
Intuitively, this means that for p ∈ {1, 2, ∞}, the SNR of ε_p-sized ℓp-attacks on any input x will be exactly equal to its fixed limit 1/ε. And in particular, the mean SNR over samples x is the same (1/ε) in all three cases.
E VULNERABILITY-DIMENSION DEPENDENCE USING DOWNSIZED IMAGENET IMAGES
We also ran a similar experiment as in Section 4.2, but instead of using upsampled CIFAR-10 images, we created a 12-class dataset of approximately 80,000 3 × 256 × 256-sized RGB-images by merging similar ImageNet-classes, resizing the smallest image-edge to 256 pixels and
center-cropping the result. We then downsized the images to 32, 64, 128 and 256 pixels per edge, and trained, not 1, but 10 CNNs per image-size. We then computed their adversarial vulnerability and average ‖∂xL‖1. This gave us 2 values per trained net, i.e. 2 x 10 values per image-size, which are shown in Figure 6. The lines follow their medians, the errorbars show their 10th and 90th quantiles. The conclusions are identical to Section 4.2: after usual training, the vulnerability and gradient-norms still increase like √d.
Note that, as the gradients get much larger at higher dimensions, the first order approximation in (2) becomes less and less valid, which explains the little inflection of the adversarial vulnerability curve. For smaller ε-thresholds, we verified that the inflection disappears.
F FIGURES WITH AN `2 PERTURBATION-THRESHOLD AND DEEP-FOOL ATTACKS
Here we plot the same curves as in the main part, but using an ℓ2-attack threshold of size ε_2 = 0.005 √d instead of the ℓ∞-threshold, and DeepFool attacks (Moosavi-Dezfooli et al., 2016) instead of iterative ℓ∞-ones in Figs. 8 and 9. Note that contrary to ℓ∞-thresholds, ℓ2-thresholds must be rescaled by √d to stay consistent across dimensions (see Eq. 3 and Appendix D). All curves look essentially the same as their counterparts in the main text.
G A VARIANT OF ADVERSARIALLY-AUGMENTED TRAINING
In usual adversarially-augmented training, the adversarial image x+ δ is generated on the fly, but is nevertheless treated as a fixed input of the neural net, which means that the gradient does not get backpropagated through δ. This need not be. As δ is itself a function of x, the gradients could actually also be backpropagated through δ. As it was only a one-line change of our code, we used this opportunity to test this variant of adversarial training (FGSM-variant in Figure 2) and thank Martín Arjovsky for suggesting it. But except for an increased computation time, we found no significant difference compared to usual augmented training. | 1. What is the focus of the paper regarding adversarial examples?
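A sketch of that one-line change (ours, with placeholder names): keeping `create_graph=True` and not detaching δ lets the gradient flow through the perturbation, whereas the usual variant detaches it.

```python
import torch
import torch.nn.functional as F

def augmented_loss(model, x, y, eps=0.005, backprop_through_delta=False):
    """FGSM-augmented loss; optionally backpropagate through delta (the variant tested here)."""
    x_req = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(loss_clean, x_req,
                                create_graph=backprop_through_delta, retain_graph=True)
    delta = eps * grad.sign()
    if not backprop_through_delta:
        delta = delta.detach()                   # usual adversarially-augmented training
    loss_adv = F.cross_entropy(model(x + delta), y)
    return 0.5 * (loss_clean + loss_adv)
```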
2. What are the strengths of the proposed approach, particularly in its theoretical explanations and empirical evidence?
3. Are there any concerns or suggestions regarding the paper's analysis, especially concerning weight distribution and potential defenses against adversarial attacks? | Review | Review
The authors provide a compelling theoretical explanation for a large class of adversarial examples. While this explanation (rooted in the norm of gradients of neural networks being the culprit for the existence of adversarial examples) is not new, they unify several old perspectives, and convincingly argue for genuinely new scaling relationships (i.e. \sqrt(d) versus linear in d scaling of sensitivity to adversarial perturbations versus input size). They prove a number of theorems relating these scaling relationships to a broad swathe of relevant model architectures, and provide thorough empirical evidence of their work.
I can honestly find very little to complain about in this work--the prose is clear, and the proofs are correct as far as I can tell (though I found Figure 4 in the appendix (left panel) to not be hugely compelling. More data here would be great!)
As much of the analysis hinges on the particularities of the weight distribution at initialization, could the authors comment on possible defenses to adversarial attack by altering this weight distribution? (By, for example, imposing that the average value must grow like 1/d)? |
ICLR | Title
Adversarial Vulnerability of Neural Networks Increases with Input Dimension
Abstract
Over the past four years, neural networks have been proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. For most current network architectures, we prove that the ℓ1-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Our proofs rely on the network’s weight distribution at initialization, but extensive experiments confirm that our conclusions still hold after usual training.
1 INTRODUCTION
Following the work of Goodfellow et al. (2015), Convolutional Neural Networks (CNNs) have been found vulnerable to adversarial examples: an adversary can drive the performance of state-of-the art CNNs down to chance level with imperceptible changes of the inputs. A number of studies have tried to address this issue, but only few have stressed that, because adversarial examples are essentially small input changes that create large output variations, they are inherently caused by large gradients of the neural network with respect to its inputs. Of course, this view, which we will focus on here, assumes that the network and loss are differentiable. It has the advantage to yield a large body of specific mathematical tools, but might not be easily extendable to masked gradients, non-smooth models or the 0-1-loss. Nevertheless, our conclusions might even hold for non-smooth models, given that the latter can often be viewed as smooth at a coarser level.
Contributions. More specifically, we provide theoretical and empirical arguments supporting the existence of a monotonic relationship between the gradient norm of the training objective (of a differentiable classifier) and its adversarial vulnerability. Evaluating this norm based on the weight statistics at initialization, we show that CNNs and most feed-forward networks, by design, exhibit increasingly large gradients with input dimension d, almost independently of their architecture. That leaves them increasingly vulnerable to adversarial noise. We corroborate our theoretical results by extensive experiments. Although some of those experiments involve adversarial regularization schemes, our goal is not to advocate a new adversarial defense (these schemes are already known), but to show how their effect can be explained by our first order analysis. We do not claim to explain all aspects of adversarial vulnerability, but we claim that our first order argument suffices to explain a significant part of the empirical findings on adversarial vulnerability. This calls for researching the design of neural network architectures with inherently smaller gradients and provides useful guidelines to practitioners and network designers.
2 FROM ADVERSARIAL EXAMPLES TO LARGE GRADIENTS
Suppose that a given classifier ϕ classifies an image x as being in category ϕ(x). An adversarial image is a small modification of x, barely noticeable to the human eye, that suffices to fool the classifier into predicting a class different from ϕ(x). It is a small perturbation of the inputs, that creates a large variation of outputs. Adversarial examples thus seem inherently related to large gradients of the network. A connection, that we will now clarify. Note that visible adversarial examples sometimes appear in the literature, but we deliberately focus on imperceptible ones.
Adversarial vulnerability and adversarial damage. In practice, an adversarial image is constructed by adding a perturbation δ to the original image x such that ‖δ‖ ≤ ε for some (small) number ε and a given norm ‖·‖ over the input space. We call the perturbed input x + δ an ε-sized ‖·‖-attack and say that the attack was successful when ϕ(x + δ) ≠ ϕ(x). This motivates Definition 1. Given a distribution P over the input-space, we call adversarial vulnerability of a classifier ϕ to an ε-sized ‖·‖-attack the probability that there exists a perturbation δ of x such that
‖δ‖ ≤ ε and ϕ(x) ≠ ϕ(x + δ) . (1)
We call the average increase-after-attack Ex∼P [∆L] of a loss L the (L-) adversarial damage (of the classifier ϕ to an ε-sized ‖·‖-attack).
When L is the 0-1-loss L0/1, adversarial damage is the accuracy-drop after attack. The 0-1-loss damage is always smaller than adversarial vulnerability, because vulnerability counts all class-changes of ϕ(x), whereas some of them may be neutral to adversarial damage (e.g. a change between two wrong classes). The L0/1-adversarial damage thus lower bounds adversarial vulnerability. Both are even equal when the classifier is perfect (before attack), because then every change of label introduces an error. It is hence tempting to evaluate adversarial vulnerability with L0/1-adversarial damage.
From ∆L0/1 to ∆L and to ∂xL. In practice however, we do not train our classifiers with the non-differentiable 0-1-loss but use a smoother loss L, such as the cross-entropy loss. For similar reasons, we will now investigate the adversarial damage Ex [∆L(x, c)] with loss L rather than L0/1. Like for Goodfellow et al. (2015); Lyu et al. (2015); Sinha et al. (2018) and many others, a classifier ϕ will hence be robust if, on average over x, a small adversarial perturbation δ of x creates only a small variation δL of the loss. Now, if ‖δ‖ ≤ ε, then a first order Taylor expansion in ε shows that
δL = max_{δ : ‖δ‖ ≤ ε} |L(x + δ, c) − L(x, c)| ≈ max_{δ : ‖δ‖ ≤ ε} |∂xL · δ| = ε |||∂xL||| , (2)
where ∂xL denotes the gradient of L with respect to x, and where the last equality stems from the definition of the dual norm |||·||| of ‖·‖. Now two remarks. First: the dual norm only kicks in because we let the input noise δ optimally adjust to the coordinates of ∂xL within its ε-constraint. This is the brand mark of adversarial noise: the different coordinates add up, instead of statistically canceling each other out as they would with random noise. For example, if we impose that ‖δ‖2 ≤ ε, then δ will strictly align with ∂xL. If instead ‖δ‖∞ ≤ ε, then δ will align with the sign of the coordinates of ∂xL. Second remark: while the Taylor expansion in (2) becomes exact for infinitesimal perturbations, for finite ones it may actually be dominated by higher-order terms. Our experiments (Figures 1 & 2) however strongly suggest that in practice the first order term dominates the others. Now, remembering that the dual norm of an ℓp-norm is the corresponding ℓq-norm, and summarizing, we have proven Lemma 2. At first order approximation in ε, an ε-sized adversarial attack generated with norm ‖·‖ increases the loss L at point x by ε |||∂xL|||, where |||·||| is the dual norm of ‖·‖. In particular, an ε-sized ℓp-attack increases the loss by ε ‖∂xL‖q where 1 ≤ p ≤ ∞ and 1/p + 1/q = 1.
Consequently, the adversarial damage of a classifier with loss L to ε-sized attacks generated with norm ‖·‖ is ε Ex|||∂xL|||. This is valid only at first order, but it proves that at least this kind of first-order vulnerability is present. We will see that the first-order predictions closely match the experiments, and that this insight helps protecting even against iterative (non-first-order) attack methods (Figure 1).
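The first-order estimate of Lemma 2 is straightforward to compute with automatic differentiation. The following Python sketch is an illustration rather than the authors' code; `model`, `x`, `y` and `eps` are assumed to be supplied by the caller, and it evaluates ε‖∂xL‖q for the dual exponent q of the chosen attack norm p.

```python
import torch
import torch.nn.functional as F

def first_order_damage(model, x, y, eps, p=float("inf")):
    """Estimate eps * ||grad_x L||_q, the first-order loss increase of Lemma 2."""
    # Dual exponent q of the attack norm p: 1/p + 1/q = 1.
    if p == float("inf"):
        q = 1.0
    elif p == 1.0:
        q = float("inf")
    else:
        q = p / (p - 1.0)
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)
    dual_norms = grad.flatten(start_dim=1).norm(p=q, dim=1)  # per-example ||grad||_q
    return eps * dual_norms.mean()
```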
Calibrating the threshold ε to the attack-norm ‖·‖. Lemma 2 shows that adversarial vulnerability depends on three main factors: (i) ‖·‖, the norm chosen for the attack, (ii) ε, the size of the attack, and (iii) Ex|||∂xL|||, the expected dual norm of ∂xL. We could see Point (i) as a measure of our sensibility to image perturbations, (ii) as our sensibility threshold, and (iii) as the classifier's expected marginal sensibility to a unit perturbation. ε Ex|||∂xL||| hence intuitively captures the discrepancy between our perception (as modeled by ‖·‖) and the classifier's perception for an input-perturbation of small size ε. Of course, this viewpoint supposes that we actually found a norm ‖·‖ (or more generally a metric) that faithfully reflects human perception – a project in its own right, far beyond the scope of this paper. However, it is clear that the threshold ε that we choose should depend on the norm ‖·‖ and hence on the input-dimension d. In particular, for a given pixel-wise order of magnitude of the perturbations δ, the ℓp-norm of the perturbation will scale like d^{1/p}. This suggests to write the threshold εp used with ℓp-attacks as:
εp = ε∞ d^{1/p} , (3)
where ε∞ denotes a dimension-independent constant. In Appendix D we show that this scaling also preserves the average signal-to-noise ratio ‖x‖2 / ‖δ‖2, both across norms and dimensions, so that εp could correspond to a constant human perception-threshold. With this in mind, the impatient reader may already jump to Section 3, which contains our main contributions: the estimation of Ex‖∂xL‖q for standard feed-forward nets. Meanwhile, the rest of this section shortly discusses two straightforward defenses that we will use later and that further illustrate the role of gradients.
A new old regularizer. Lemma 2 shows that the loss of the network after an ε/2-sized ‖·‖-attack is
L_{ε,|||·|||}(x, c) := L(x, c) + (ε/2) |||∂xL||| . (4)
It is thus natural to take this loss-after-attack as a new training objective. Here we introduced a factor 1/2 for reasons that will become clear in a moment. Incidentally, for ‖·‖ = ‖·‖2, this new loss reduces to an old regularization-scheme proposed by Drucker & LeCun (1991) called double-backpropagation. At the time, the authors argued that slightly decreasing a function's or a classifier's sensitivity to input perturbations should improve generalization. In a sense, this is exactly our motivation when defending against adversarial examples. It is thus not surprising to end up with the same regularization term. Note that our reasoning only shows that training with one specific norm |||·||| in (4) helps to protect against adversarial examples generated from ‖·‖. A priori, we do not know what will happen for attacks generated with other norms; but our experiments suggest that training with one norm also protects against other attacks (see Figure 2 and Section 4.1).
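As a hedged sketch of how the loss-after-attack (4) can be used as a training objective, the step below penalizes the dual norm of ∂xL with weight ε/2; `create_graph=True` makes the penalty itself differentiable with respect to the weights, which is exactly double-backpropagation. Names such as `model` and `optimizer` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def gradient_penalty_step(model, optimizer, x, y, eps, q=1.0):
    """One training step with the regularized loss L + (eps/2) * ||grad_x L||_q."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)  # double backprop
    penalty = grad.flatten(start_dim=1).norm(p=q, dim=1).mean()
    total = loss + 0.5 * eps * penalty
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```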
Link to adversarially-augmented training. In (1), ε designates an attack-size threshold, while in (4), it is a regularization-strength. Rather than a notation conflict, this reflects an intrinsic duality between two complementary interpretations of ε, which we now investigate further. Suppose that, instead of using the loss-after-attack, we augment our training set with ε-sized ‖·‖-attacks x + δ, where for each training point x, the perturbation δ is generated on the fly to locally maximize the loss-increase. Then we are effectively training with
L̃_{ε,‖·‖}(x, c) := (1/2) (L(x, c) + L(x + δ, c)) , (5)
where by construction δ satisfies (2). We will refer to this technique as adversarially augmented training. It was first introduced by Goodfellow et al. (2015) with ‖·‖ = ‖·‖∞ under the name of FGSM1-augmented training. Using the first order Taylor expansion in ε of (2), this 'old-plus-post-attack' loss of (5) simply reduces to our loss-after-attack, which proves
Proposition 3. Up to first-order approximations in ε, L̃_{ε,‖·‖} = L_{ε,|||·|||}. Said differently, for small enough ε, adversarially-augmented training with ε-sized ‖·‖-attacks amounts to penalizing the dual norm |||·||| of ∂xL with weight ε/2. In particular, double-backpropagation corresponds to training with ℓ2-attacks, while FGSM-augmented training corresponds to an ℓ1-penalty on ∂xL.
This correspondence between training with perturbations and using a regularizer can be compared to Tikhonov regularization: Tikhonov regularization amounts to training with random noise (Bishop, 1995), while training with adversarial noise amounts to penalizing ∂xL. Section 4.1 verifies the correspondence between adversarial augmentation and gradient regularization empirically, which also strongly suggests the empirical validity of the first-order Taylor expansion in (2).
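For comparison, a minimal sketch of adversarially-augmented training (5) with single-step ℓ∞ (FGSM) perturbations is given below; by Proposition 3 it should behave, for small ε, like an ℓ1-penalty on ∂xL. This is an illustration under our own naming assumptions, not the exact training code used in Section 4.

```python
import torch
import torch.nn.functional as F

def fgsm_augmented_step(model, optimizer, x, y, eps):
    """One step of adversarially-augmented training, Eq. (5), with l_inf attacks."""
    x = x.clone().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x), y)
    # Generate the perturbation on the fly; it is then treated as a fixed input.
    (grad,) = torch.autograd.grad(loss_clean, x, retain_graph=True)
    x_adv = (x + eps * grad.sign()).detach()
    loss_adv = F.cross_entropy(model(x_adv), y)
    total = 0.5 * (loss_clean + loss_adv)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()
```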
3 ESTIMATING ‖∂xL‖q TO EVALUATE ADVERSARIAL VULNERABILITY
In this section, we evaluate the size of ‖∂xL‖q for standard neural network architectures. We start with fully-connected networks, and finish with a much more general theorem that, not only encompasses CNNs (with or without strided convolutions), but also shows that the gradient-norms are essentially independent of the network topology. We start our analysis by showing how changing q affects the size of ‖∂xL‖q. Suppose for a moment that the coordinates of ∂xL have typical magnitude |∂xL|. Then ‖∂xL‖q scales like d^{1/q} |∂xL|. Consequently
εp ‖∂xL‖q ∝ εp d^{1/q} |∂xL| ∝ ε∞ d |∂xL| . (6)
1 FGSM = Fast Gradient Sign Method
This equation carries two important messages. First, we see how ‖∂xL‖q depends on d and q. The dependence seems highest for q = 1. But once we account for the varying perceptibility threshold εp ∝ d^{1/p}, we see that adversarial vulnerability scales like d · |∂xL|, whatever ℓp-norm we use. Second, (6) shows that to be robust against any type of ℓp-attack at any input-dimension d, the average absolute value of the coefficients of ∂xL must grow slower than 1/d. Now, here is the catch, which brings us to our core insight.
3.1 CORE IDEA: ONE NEURON WITH MANY INPUTS
In order to preserve the activation variance of the neurons from layer to layer, the neural weights are usually initialized with a variance that is inversely proportional to the number of inputs per neuron. Imagine for a moment that the network consisted only of one output neuron o linearly connected to all input pixels. For the purpose of this example, we assimilate o and L. Because we initialize the weights with a variance of 1/d, their average absolute value |∂xo| ≡ |∂xL| grows like 1/√d, rather than the required 1/d. By (6), the adversarial vulnerability ‖∂xo‖q ≡ ‖∂xL‖q therefore increases like d/√d = √d.
This toy example shows that the standard initialization scheme, which preserves the variance from layer to layer, causes the average coordinate-size |∂xL| to grow like 1/√d instead of 1/d. When an ℓ∞-attack tweaks its ε-sized input-perturbations to align with the coordinate-signs of ∂xL, all coordinates of ∂xL add up in absolute value, resulting in an output-perturbation that scales like √d and leaves the network increasingly vulnerable with growing input-dimension.
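This scaling is easy to check numerically. In the small NumPy simulation below (an illustration under the stated 1/d-variance initialization assumption), the input gradient of a single linear output neuron is its weight vector, and its ℓ1-norm divided by √d stays roughly constant as d grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in [64, 256, 1024, 4096, 16384]:
    w = rng.normal(0.0, np.sqrt(1.0 / d), size=d)  # weights with variance 1/d
    # For o(x) = w . x, the input gradient is w itself, so ||grad_x o||_1 = ||w||_1.
    print(d, np.abs(w).sum() / np.sqrt(d))         # approximately constant (~0.8)
```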
3.2 GENERALIZATION TO DEEP NETWORKS
Our next theorems generalize the previous toy example to a very wide class of feedforward nets with ReLU activation functions. For illustration purposes, we start with fully connected nets and only then proceed to the broader class, which includes any succession of (possibly strided) convolutional layers. In essence, the proofs iterate our insight on one layer over a sequence of layers. They all rely on the following set (H) of hypotheses:
H1 Non-input neurons are followed by a ReLU killing half of its inputs, independently of the weights.
H2 Neurons are partitioned into layers, meaning groups that each path traverses at most once.
H3 All weights have 0 expectation and variance 2/(in-degree) ('He-initialization').
H4 The weights from different layers are independent.
H5 Two distinct weights w, w′ from a same node satisfy E [ww′] = 0.
If we follow common practice and initialize our nets as proposed by He et al. (2015), then H3-H5 are satisfied at initialization by design, while H1 is usually a very good approximation (Balduzzi et al., 2017). Note that such i.i.d. weight assumptions have been widely used to analyze neural nets and are at the heart of very influential and successful prior work (e.g., equivalence between neural nets and Gaussian processes as pioneered by Neal 1996). Nevertheless, they do not hold after training. That is why all our statements in this section are to be understood as orders of magnitudes that are very well satisfied at initialization in theory and in practice, and that we will confirm experimentally after training in Section 4. Said differently, while our theorems rely on the statistics of neural nets at initialization, our experiments confirm their conclusions after training.
Theorem 4 (Vulnerability of Fully Connected Nets). Consider a succession of fully connected layers with ReLU activations which takes inputs x of dimension d, satisfies assumptions (H), and outputs logits fk(x) that get fed to a final cross-entropy-loss layer L. Then the coordinates of ∂xfk grow like 1/√d, and
‖∂xL‖q ∝ d^{1/q − 1/2} and εp ‖∂xL‖q ∝ √d . (7)
These networks are thus increasingly vulnerable to ℓp-attacks with growing input-dimension.
Theorem 4 is a special case of the next theorem, which will show that the previous conclusions are essentially independent of the network-topology. We will use the following symmetry assumption on the neural connections. For a given path p, let the path-degree dp be the multiset of encountered in-degrees along path p. For a fully connected network, this is the unordered sequence of layer-sizes
preceding the last path-node, including the input-layer. Now consider the multiset {dp}p∈P(x,o) of all path-degrees when p varies among all paths from input x to output o. The symmetry assumption (relatively to o) is
(S) All input nodes x have the same multiset {dp}p∈P(x,o) of path-degrees from x to o. Intuitively, this means that the statistics of degrees encountered along paths to the output are the same for all input nodes. This symmetry assumption is exactly satisfied by fully connected nets, almost satisfied by CNNs (up to boundary effects, which can be alleviated via periodic or mirror padding) and exactly satisfied by strided layers, if the layer-size is a multiple of the stride.
Theorem 5 (Vulnerability of Feedforward Nets). Consider any feed-forward network with linear connections and ReLU activation functions. Assume the net satisfies assumptions (H) and outputs logits fk(x) that get fed to the cross-entropy-loss L. Then ‖∂xfk‖2 is independent of the input dimension d and ε2 ‖∂xL‖2 ∝ √d. Moreover, if the net satisfies the symmetry assumption (S), then
|∂xfk| ∝ 1/√d and (7) still holds: ‖∂xL‖q ∝ d^{1/q − 1/2} and εp ‖∂xL‖q ∝ √d.
Theorems 4 and 5 are proven in Appendix B. The main proof idea is that in the gradient norm computation, the He-initialization exactly compensates the combinatorics of the number of paths in the network, so that this norm becomes independent of the network topology. In particular, we get
Corollary 6 (Vulnerability of CNNs). In any succession of convolution and dense layers, strided or not, with ReLU activations, that satisfies assumptions (H) and outputs logits that get fed to the cross-entropy-loss L, the gradients of the logit-coordinates scale like 1/√d and (7) is satisfied. The network is hence increasingly vulnerable with growing input-resolution to attacks generated with any ℓp-norm.
Appendix A shows that the network gradients are dampened when replacing strided layers by average poolings, essentially because average-pooling weights do not follow the He-init assumption H3.
4 EMPIRICAL RESULTS
In Section 4.1, we empirically verify the validity of the first-order Taylor approximation made in (2) (Fig.1), for example by checking the correspondence between loss-gradient regularization and adversarially-augmented training (Fig.2). Section 4.2 then empirically verifies that both the average ℓ1-norm of ∂xL and the adversarial vulnerability grow like √d as predicted by Corollary 6. For all experiments, we approximate adversarial vulnerability using various attacks of the Foolbox package (Rauber et al., 2017). We use an ℓ∞ attack-threshold of size ε∞ = 0.005 (and later 0.002) which, for pixel-values ranging from 0 to 1, is completely imperceptible but suffices to fool the classifiers on a significant proportion of examples. This ε∞-threshold should not be confused with the regularization-strengths ε appearing in (4) and (5), which will be varied in some experiments.
4.1 FIRST-ORDER APPROXIMATION, GRADIENT PENALTY, ADVERSARIAL AUGMENTATION
We train several CNNs with the same architecture to classify CIFAR-10 images (Krizhevsky, 2009). For each net, we use a specific training method with a specific regularization value ε. The training methods used were ℓ1- and ℓ2-penalization of ∂xL (Eq. 4), adversarial augmentation with ℓ∞- and ℓ2-attacks (Eq. 5), projected gradient descent (PGD) with randomized starts (7 steps per attack with step-size = 0.2 ε∞; see Madry et al. 2018) and the cross-Lipschitz regularizer (Eq. 17 in Appendix C). We then test the adversarial vulnerability of each trained network using the following attack-methods: single-step ℓ∞- (FGSM) and ℓ2-attacks, iterative ℓ∞- (PGD) and ℓ2-attacks, and DeepFool attacks (Moosavi-Dezfooli et al., 2016). All networks have 6 'strided convolution → batchnorm → ReLU' layers with strides [1, 2, 2, 2, 2, 2] respectively and 64 output-channels each, followed by a final fully-connected linear layer. Results are summarized in Figures 1 and 2. Figure 1 fixes the training method – gradient ℓ1-regularization – and plots the obtained adversarial vulnerabilities for various attack types. Figure 2 fixes the attack type – iterative ℓ∞-attacks – but plots the curves obtained for various training methods. Note that our goal here is not to advocate one defense over another, but rather to check the validity of the Taylor expansion, and empirically verify that first order terms (i.e., gradients) suffice to explain much of the observed adversarial vulnerability. Similarly, our goal in testing several attacks (Figure 1) is not to present a specifically strong one, but rather to verify that for all attacks, the trends are the same: the vulnerability grows with increasing gradients.
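A hedged PyTorch sketch of the common architecture described above (6 'strided convolution → batchnorm → ReLU' blocks with 64 channels and strides [1, 2, 2, 2, 2, 2], followed by a linear layer) is given below; the kernel size and padding are assumptions not stated in the text, chosen so that 32×32 inputs are reduced to a single spatial position before the final linear layer.

```python
import torch.nn as nn

def make_cifar_cnn(num_classes=10):
    layers, in_ch = [], 3
    for stride in [1, 2, 2, 2, 2, 2]:   # 6 'strided conv -> batchnorm -> ReLU' blocks
        layers += [nn.Conv2d(in_ch, 64, kernel_size=3, stride=stride, padding=1),
                   nn.BatchNorm2d(64),
                   nn.ReLU(inplace=True)]
        in_ch = 64
    # 32x32 inputs end up as 64 x 1 x 1 feature maps before the classifier.
    return nn.Sequential(*layers, nn.Flatten(), nn.Linear(64, num_classes))
```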
Validity of first order expansion. The following observations support the validity of the first order Taylor expansion in (2) and suggest that it is a crucial component of adversarial vulnerability: (i) the efficiency of the first-order defense against iterative (non-first-order) attacks (Fig.1a); (ii) the striking similarity between the PGD curves (adversarial augmentation with iterative attacks) and the other adversarial training training curves (one-step attacks/defenses); (iii) the functional-like dependence between any approximation of adversarial vulnerability and Ex‖∂xL‖1 (Fig.1b), and its independence on the training method (Fig.2d). (iv) the excellent correspondence between the gradient-regularization and adversarial training curves (see next paragraph). Said differently, adversarial examples seem indeed to be primarily caused by large gradients of the classifier as captured via the induced loss. 2
2 On Figure 1, the two ℓ∞-attacks seem more efficient than the others, because we chose an ℓ∞ perturbation threshold (ε∞). With an ℓ2-threshold it is the opposite (see Figure 7, Appendix F).
Illustration of Proposition 3. The upper row of Figure 2 plots Ex‖∂xL‖1, adversarial vulnerability and accuracy as a function of ε d^{1/p}. The excellent match between the adversarial augmentation curve with p = ∞ (p = 2) and its gradient-regularization dual counterpart with q = 1 (resp. q = 2) illustrates the duality between ε as a threshold for adversarially-augmented training and ε as a regularization constant in the regularized loss (Proposition 3). It also supports the validity of the first-order Taylor expansion in (2).
Confirmation of (3). Still on the upper row, the curves for p = ∞, q = 1 have no reason to match those for p = q = 2 when plotted against ε, because an ε-threshold is relative to a specific attack-norm. However, (3) suggested that the rescaled thresholds ε d^{1/p} may approximately correspond to a same 'threshold-unit' across ℓp-norms and across dimension. This is well confirmed by the upper row plots: by rescaling the x-axis, the p = q = 2 and q = 1, p = ∞ curves get almost super-imposed. Accuracy-vs-Vulnerability Trade-Off. Merging Figures 2b and 2c by taking out ε, Figure 2f shows that all gradient regularization and adversarial training methods yield equivalent accuracy-vulnerability trade-offs. Incidentally, for higher penalization values, these trade-offs appear to be much better than those given by cross Lipschitz regularization.
The penalty-norm does not matter. We were surprised to see that on Figures 2d and 2f, the L_{ε,q} curves are almost identical for q = 1 and 2. This indicates that both norms can be used interchangeably in (4) (modulo proper rescaling of ε via (3)), and suggests that protecting against a specific attack-norm also protects against others. (6) may provide an explanation: if the coordinates of ∂xL behave like centered, uncorrelated variables with equal variance – which follows from assumptions (H) –, then the ℓ1- and ℓ2-norms of ∂xL are simply proportional. Plotting Ex‖∂xL(x)‖2 against Ex‖∂xL(x)‖1 in Figure 2e confirms this explanation. The slope is independent of the training method. Therefore, penalizing ‖∂xL(x)‖1 during training will not only decrease Ex‖∂xL‖1 (as shown in Figure 2a), but also drive down Ex‖∂xL‖2 and vice-versa.
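The proportionality between the two gradient norms can be checked directly on held-out data, for instance with the following sketch (illustrative only; `model` and `test_loader` are assumptions, and the loop mirrors Figure 2e only loosely).

```python
import torch
import torch.nn.functional as F

def gradient_norm_ratio(model, test_loader, device="cpu"):
    """Average ||grad_x L||_2 divided by average ||grad_x L||_1 over a test set."""
    l1_sum, l2_sum = 0.0, 0.0
    for x, y in test_loader:
        x = x.to(device).requires_grad_(True)
        loss = F.cross_entropy(model(x), y.to(device))
        (g,) = torch.autograd.grad(loss, x)
        g = g.flatten(start_dim=1)
        l1_sum += g.norm(p=1, dim=1).sum().item()
        l2_sum += g.norm(p=2, dim=1).sum().item()
    return l2_sum / l1_sum   # roughly the same slope for every training method
```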
4.2 VULNERABILITY GROWS WITH INPUT RESOLUTION
Theorems 4-5 and Corollary 6 predict a linear growth of the average ℓ1-norm of ∂xL with the square root of the input dimension d, and therefore also of adversarial vulnerability (Lemma 2). To test these predictions, we upsampled the CIFAR-10 images (of size 3 x 32 x 32) by copying pixels so as to get 4 datasets with, respectively, 32, 64, 128 and 256 pixels per edge. We then trained a CNN on each dataset
and computed their adversarial vulnerability (with iterative ℓ∞-attacks, threshold ε∞ = 0.002) and average ‖∂xL‖1 over the last 20 epochs on the same held-out test-dataset. This gave us 2 x 20 values per net and image-size, summarized in Figure 3. The dashed lines follow their medians and the errorbars show their 10th and 90th quantiles. As predicted by our theorems, both ‖∂xL‖1 and adversarial vulnerability grow approximately linearly with √d. We also ran a similar experiment on downsized ImageNet images, where we train several identical nets per image-size rather than just one. Conclusions are unchanged. See Appendix E.
All networks had exactly the same amount of parameters and very similar structure across the various input-resolutions. The CNNs were a succession of 8 'convolution → batchnorm → ReLU' layers with 64 output channels, followed by a final full-connection to the 12 logit-outputs. We used 2×2 max-poolings after the convolutions of layers 2, 4, 6 and 8, and a final max-pooling after layer 8 that fed only 1 neuron per channel to the fully-connected layer. To ensure that the convolution-kernels cover similar ranges of the images across each of the 32, 64, 128 and 256 input-resolutions, we respectively dilated all convolutions ('à trous') by a factor 1, 2, 4 and 8.
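The resolution-scaling setup can be sketched as follows (an assumption-laden illustration, not the original preprocessing code): nearest-neighbour interpolation reproduces the pixel-copy upsampling, and the dilation/padding pair keeps the spatial size unchanged while letting kernels cover a similar image fraction at each resolution.

```python
import torch.nn as nn
import torch.nn.functional as F

def upsample_by_copy(x, factor):
    """Upsample a batch of 3x32x32 images by copying pixels (nearest neighbour)."""
    return F.interpolate(x, scale_factor=factor, mode="nearest")

def dilated_conv(in_ch, out_ch, factor):
    """3x3 convolution dilated ('a trous') by `factor` in {1, 2, 4, 8}."""
    return nn.Conv2d(in_ch, out_ch, kernel_size=3, dilation=factor, padding=factor)
```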
5 DISCUSSIONS
5.1 IMPLICATIONS: WHY PRIOR VULNERABILITY MAY MATTER
Our theoretical results show that the priors of classical neural networks yield vulnerable functions because of naturally high gradients. And our experiments (Fig 3 & 6) suggest that usual training does not escape these prior properties. But how may these insights help understanding the vulnerability of robustly trained networks? Clearly, to be successful, robust training algorithms must escape ill-behaved priors, which explains why most methods (e.g. FGSM, PGD) are essentially gradient penalization techniques. But, MNIST aside, even state-of-the-art methods largely fail at protecting current network architectures (Madry et al., 2018), and understanding why is the motivation for this and many other papers. Interestingly, Schmidt et al. (2018) recently noticed that those methods actually do protect the nets on training examples, but fail to generalize to the test set. They hence conclude that state-of-the-art robustification algorithms work, but need more data. Alternatively however, when generalization fails, one can also reduce the model's complexity. Large fully connected nets for example typically fail to generalize to out-of-sample examples: getting similar accuracies to CNNs would need prohibitively many training points. Similarly, Schmidt et al.'s observations may suggest that, outside the training points, networks tend to recover their prior properties, i.e. naturally large gradients. Figure 4 corroborates this hypothesis. It plots the evolution over training epochs of the ℓ1-gradient-norms of the CNNs from Section 4.2 (Fig 3) on the training and test sets respectively. The discrepancy is unmistakable: after a brief initialization phase, the norms decrease on the training set, but increase on the test set. They are moreover almost input-dimension independent on the training set, but scale as √d on the test set (as seen in Fig 3) up to respectively 2, 4, 8 and 16 times the training set values. These observations suggest that, with the current amount of data, tackling adversarial vulnerability may require new architectures with inherently smaller gradients. Searching these architectures among those with well-behaved prior-gradients seems a reasonable start, where our theoretical results may prove very useful.3
5.2 RELATED LITERATURE
On network vulnerability. Goodfellow et al. (2015) already stressed that adversarial vulnerability increases with growing dimension d. But their argument only relied on a linear 'one-output-to-many-inputs'-model with dimension-independent weights. They therefore concluded on a linear growth of adversarial vulnerability with d. In contrast, our theory applies to almost any standard feed-forward architecture (not just linear), and shows that, once we adjust for the weight's dimension-dependence, adversarial vulnerability increases like √d (not d), almost independently of the architecture. Nevertheless, our experiments confirm Goodfellow et al.'s idea that our networks are "too linear-like", in the sense that a first-order Taylor expansion is indeed sufficient to explain the adversarial vulnerability of neural networks. As suggested by the one-output-to-many-inputs model, the culprit is that growing
3 Appendix A investigates such a preliminary direction by introducing average poolings, which have a weight-size 1/(in-channels) rather than the typical 1/√(in-channels) of the other He-initialized weights.
dimensionality gives the adversary more and more room to 'wriggle around' with the noise and adjust to the gradient of the output neuron. This wriggling, we show, is still possible when the output is connected to all inputs only indirectly, even when no neuron is directly connected to all inputs, like in CNNs. This explanation of adversarial vulnerability is independent of the intrinsic dimensionality or geometry of the data (compare to Amsaleg et al. 2017; Gilmer et al. 2018). Finally, let us mention that Fawzi et al. (2016) show a close link between the vulnerability to small worst-case perturbations (as studied here) and larger average perturbations. Our findings on the adversarial vulnerability of NNs to small perturbations could thus be translated accordingly.
On robustification algorithms. Incidentally, Goodfellow et al. (2015) also already relate adversarial vulnerability to large gradients of the loss L, an insight at the very heart of their FGSM-algorithm. They however do not propose any explicit penalizer on the gradient of L other than indirectly through adversarially-augmented training. Conversely, Ross & Doshi-Velez (2018) propose the old double-backpropagation to robustify networks but make no connection to FGSM and adversarial augmentation. Lyu et al. (2015) discuss and use the connection between gradient-penalties and adversarial augmentation, but never actually compare both in experiments. This comparison however is essential to test the validity of the first-order Taylor expansion in (2), as confirmed by the similarity between the gradient-regularization and adversarial-augmentation curves in Figure 2. Hein & Andriushchenko (2017) derived yet another gradient-based penalty – the cross-Lipschitz-penalty – by considering (and proving) formal guarantees on adversarial vulnerability itself, rather than adversarial damage. While both penalties are similar in spirit, focusing on the adversarial damage rather than vulnerability has two main advantages. First, it achieves better accuracy-to-vulnerability ratios, both in theory and practice, because it ignores class-switches between misclassified examples and penalizes only those that reduce the accuracy. Second, it allows one to deal with only one number, ∆L, whereas Hein & Andriushchenko's cross-Lipschitz regularizer and theoretical guarantees explicitly involve all K logit-functions (and their gradients). See Appendix C. Penalizing network-gradients is also at the heart of contractive auto-encoders as proposed by Rifai et al. (2011), where it is used to regularize the encoder-features. Seeing adversarial training as a generalization method, let us also mention Hochreiter & Schmidhuber (1995), who propose to enhance generalization by searching for parameters in a "flat minimum region" of the loss. This leads to a penalty involving the gradient of the loss, but taken with respect to the weights, rather than the inputs. In the same vein, a gradient-regularization of the loss of generative models also appears in Proposition 6 of Ollivier (2014), where it stems from a code-length bound on the data (minimum description length). More generally, the gradient regularized objective (4) is essentially the first-order approximation of the robust training objective max_{‖δ‖≤ε} L(x + δ, c), which has a long history in math (Wald, 1945), machine learning (Xu et al., 2009) and now adversarial vulnerability (Sinha et al., 2018). Finally, Cisse et al. (2017) propose new network-architectures that have small gradients by design, rather than by special training: an approach that makes all the more sense, considering the conclusion of Theorems 4 and 5. For further details and references on adversarial attacks and defenses, we refer to Yuan et al. (2017).
6 CONCLUSION
For differentiable classifiers and losses, we showed that adversarial vulnerability increases with the gradients ∂xL of the loss, which is confirmed by the near-perfect functional relationship between gradient norms and vulnerability (Figures 1 & 2d). We then evaluated the size of ‖∂xL‖q and showed that, at initialization, usual feed-forward nets (convolutional or fully connected) are increasingly vulnerable to ℓp-attacks with growing input dimension d (the image-size), almost independently of their architecture. Our experiments show that, on the tested architectures, usual training escapes those prior gradient (and vulnerability) properties on the training set, but not on the test set. Schmidt et al. (2018) suggest that alleviating this generalization gap requires more data. But a natural (complementary) alternative would be to search for architectures with naturally smaller gradients, and in particular, with well-behaved priors. Despite all their limitations (being only first-order, assuming a prior weight-distribution and a differentiable loss and architecture), our theoretical insights may thereby still prove to be precious future allies.
A EFFECTS OF STRIDED AND AVERAGE-POOLING LAYERS ON ADVERSARIAL VULNERABILITY
It is common practice in CNNs to use average-pooling layers or strided convolutions to progressively decrease the number of pixels per channel. Corollary 6 shows that using strided convolutions does not protect against adversarial examples. However, what if we replace strided convolutions by convolutions with stride 1 plus an average-pooling layer? Theorem 5 considers only randomly initialized weights with typical size 1/√(in-degree). Average-poolings however introduce deterministic weights of size 1/(in-degree). These are smaller and may therefore dampen the input-to-output gradients and protect against adversarial examples. We confirm this in our next theorem, which uses a slightly modified version (H′) of (H) to allow average pooling layers. (H′) is (H), but where the He-init H3 applies to all weights except the (deterministic) average pooling weights, and where H1 places a ReLU on every non-input and non-average-pooling neuron.
Theorem 7 (Effect of Average-Poolings). Consider a succession of convolution layers, dense layers and n average-pooling layers, in any order, that satisfies (H′) and outputs logits fk(x). Assume the n average pooling layers have a stride equal to their mask size and perform averages over a1, ..., an nodes respectively. Then ‖∂xfk‖2 and |∂xfk| scale like 1/√(a1 · · · an) and 1/√(d a1 · · · an) respectively.
Proof in Appendix B.4. Theorem 7 suggests to try and replace any strided convolution by its non-strided counterpart, followed by an average-pooling layer. It also shows that if we systematically reduce the number of pixels per channel down to 1 by using only non-strided convolutions and average-pooling layers (i.e. d = ∏_{i=1}^{n} a_i), then all input-to-output gradients should become independent of d, thereby making the network completely robust to adversarial examples.
Our following experiments (Figure 5) show that after training, the networks get indeed robustified to adversarial examples, but remain more vulnerable than suggested by Theorem 7.
Experimental setup. Theorem 7 shows that, contrary to strided layers, average-poolings should decrease adversarial vulnerability. We tested this hypothesis on CNNs trained on CIFAR-10, with 6 blocks of ‘convolution → BatchNorm→ReLU’ with 64 output-channels, followed by a final average pooling feeding one neuron per channel to the last fully-connected linear layer. Additionally, after every second convolution, we placed a pooling layer with stride and mask-size (2, 2) (thus acting on 2× 2
neurons at a time, without overlap). We tested average-pooling, strided and max-pooling layers and trained 20 networks per architecture. Results are shown in Figure 5. All accuracies are very close, but, as predicted, the networks with average pooling layers are more robust to adversarial images than the others. However, they remain more vulnerable than what would follow from Theorem 7. We also noticed that, contrary to the strided architectures, their gradients after training are an order of magnitude higher than at initialization and than predicted. This suggests that assumptions (H) get more violated when using average-poolings instead of strided layers. Understanding why will need further investigations.
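A hedged sketch of the three compared blocks is shown below: a strided convolution, or a stride-1 convolution followed by an average- or max-pooling layer; kernel size and padding are illustrative assumptions.

```python
import torch.nn as nn

def downsample_block(in_ch, out_ch, mode="avgpool"):
    """One 'conv -> BatchNorm -> ReLU' block that halves the spatial resolution."""
    conv_stride = 2 if mode == "strided" else 1
    block = [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=conv_stride, padding=1),
             nn.BatchNorm2d(out_ch),
             nn.ReLU(inplace=True)]
    if mode == "avgpool":
        block.append(nn.AvgPool2d(kernel_size=2, stride=2))  # deterministic 1/4 weights
    elif mode == "maxpool":
        block.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*block)
```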
B PROOFS
B.1 PROOF OF PROPOSITION 3
Proof. Let δ be an adversarial perturbation with ‖δ‖ = 1 that locally maximizes the loss increase at point x, meaning that δ = arg max_{‖δ′‖≤1} ∂xL · δ′. Then, by definition of the dual norm of ∂xL we have: ∂xL · (ε δ) = ε |||∂xL|||. Thus
L̃_{ε,‖·‖}(x, c) = (1/2) (L(x, c) + L(x + εδ, c)) = (1/2) (2L(x, c) + ε |∂xL · δ| + o(ε‖δ‖))
= L(x, c) + (ε/2) |||∂xL||| + o(ε) = L_{ε,|||·|||}(x, c) + o(ε) .
B.2 PROOF OF THEOREM 4
Proof. Let x designate a generic coordinate of x. To evaluate the size of ‖∂xL‖q, we will evaluate the size of the coordinates ∂xL of ∂xL by decomposing them into
∂xL = ∑_{k=1}^{K} (∂L/∂fk) (∂fk/∂x) =: ∑_{k=1}^{K} ∂kL · ∂xfk ,
where fk(x) denotes the logit-probability of x belonging to class k. We now investigate the statistical properties of the logit gradients ∂xfk, and then see how they shape ∂xL.
Step 1: Statistical properties of ∂xfk. Let P(x, k) be the set of paths p from input neuron x to output-logit k. Let p− 1 and p be two successive neurons on path p, and p̃ be the same path p but without its input neuron. Let wp designate the weight from p− 1 to p and ωp be the path-product ωp := ∏ p∈p̃ wp. Finally, let σp (resp. σp) be equal to 1 if the ReLU of node p (resp. if path p) is active for input x, and 0 otherwise.
As previously noticed by Balduzzi et al. (2017) using the chain rule, we see that ∂xfk is the sum of all ωp whose path is active, i.e. ∂xfk(x) = ∑ p∈P(x,k) ωpσp. Consequently:
EW,σ [ ∂xfk(x)^2 ] = ∑_{p∈P(x,k)} ∏_{p∈p̃} EW [ w_p^2 ] Eσ [ σ_p^2 ]
= |P(x, k)| ∏_{p∈p̃} (2/d_{p−1}) (1/2) = ∏_{p∈p̃} d_p · ∏_{p∈p̃} (1/d_{p−1}) = 1/d . (8)
The first equality uses H1 to decouple the expectations over weights and ReLUs, and then applies Lemma 10 of Appendix B.3, which uses H3-H5 to kill all cross-terms and take the expectation over weights inside the product. The second equality uses H3 and the fact that the resulting product is the same for all active paths. The third equality counts the number of paths from x to k and we conclude by noting that all terms cancel out, except dp−1 from the input layer which is d. Equation 8 shows that |∂xfk| ∝ 1/ √ d.
Step 2: Statistical properties of ∂kL and ∂xL. Defining qk(x) := e^{fk(x)} / ∑_{h=1}^{K} e^{fh(x)} (the probability of image x belonging to class k according to the network), we have, by definition of the cross-entropy loss, L(x, c) := − log qc(x), where c is the label of the target class. Thus:
∂kL(x) = −qk(x) if k ≠ c, and ∂kL(x) = 1 − qc(x) if k = c, and
∂xL(x) = (1 − qc) ∂xfc(x) + ∑_{k≠c} qk (−∂xfk(x)) . (9)
Using again Lemma 10, we see that the ∂xfk(x) are K centered and uncorrelated variables. So ∂xL(x) is approximately the sum of K uncorrelated variables with zero-mean, and its total variance is given by ( (1 − qc)^2 + ∑_{k≠c} q_k^2 ) / d. Hence the magnitude of ∂xL(x) is 1/√d for all x, so the ℓq-norm of the full input gradient is d^{1/q−1/2}. (6) concludes.
Remark 1. Equation 9 can be rewritten as
∂xL(x) = ∑_{k=1}^{K} qk(x) ( ∂xfc(x) − ∂xfk(x) ) . (10)
As the term k = c disappears, the norm of the gradients ∂xL(x) appears to be controlled by the total error probability. This suggests that, even without regularization, trying to decrease the ordinary
classification error is still a valid strategy against adversarial examples. It reflects the fact that when increasing the classification margin, larger gradients of the classifier’s logits are needed to push images from one side of the classification boundary to the other. This is confirmed by Theorem 2.1 of Hein & Andriushchenko (2017). See also (16) in Appendix C.
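This decomposition can be verified numerically with autograd on a tiny model, as in the hedged snippet below; note that autograd returns ∑k qk(∂xfk − ∂xfc), i.e. the quantity in (10) up to an overall sign, which is irrelevant for the gradient norms used in the argument.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
net = torch.nn.Linear(5, 3)                  # toy "logit" network, K = 3 classes
x = torch.randn(5, requires_grad=True)
c = 1                                        # target class

loss = F.cross_entropy(net(x).unsqueeze(0), torch.tensor([c]))
(g_direct,) = torch.autograd.grad(loss, x)

q = F.softmax(net(x), dim=0).detach()
grads_f = [torch.autograd.grad(net(x)[k], x)[0] for k in range(3)]
g_decomp = sum(q[k] * (grads_f[c] - grads_f[k]) for k in range(3))
print(torch.allclose(g_direct, -g_decomp, atol=1e-6))   # True (opposite sign)
```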
B.3 PROOF OF THEOREM 5
The proof of Theorem 5 is very similar to the one of Theorem 4, but we will need to first generalize the equalities appearing in (8). To do so, we identify the computational graph of a neural network to an abstract Directed Acyclic Graph (DAG) which we use to prove the needed algebraic equalities. We then concentrate on the statistical weight-interactions implied by assumption (H), and finally throw these results together to prove the theorem. In all the proof, o will designate one of the output-logits fk(x).
Lemma 8. Let x be the vector of inputs to a given DAG, o be any leaf-node of the DAG, x a generic coordinate of x. Let p be a path from the set of paths P(x, o) from x to o, p̃ the same path without node x, p a generic node in p̃, and dp be its input-degree. Then:
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1 . (11)
Proof. We will reason on a random walk starting at o and going up the DAG by choosing any incoming node with equal probability. The DAG being finite, this walk will end up at an input-node x with probability 1. Each path p is taken with probability ∏_{p∈p̃} (1/d_p). And the probability to end up at an input-node is the sum of all these probabilities, i.e. ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} d_p^{−1}, which concludes.
The sum over all inputs x in (11) being 1, on average it is 1/d for each x, where d is the total number of inputs (i.e. the length of x). It becomes an equality under assumption (S):
Lemma 9. Under the symmetry assumption (S), and with the previous notations, for any input x ∈ x:
∑_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1/d . (12)
Proof. Let us denote D(x, o) := {d_p}_{p∈P(x,o)}. Each path p in P(x, o) corresponds to exactly one element d_p in D(x, o) and vice-versa. And the elements d_p of d_p completely determine the product ∏_{p∈p̃} d_p^{−1}. By using (11) and the fact that, by (S), the multiset D(x, o) is independent of x, we hence conclude
∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = ∑_{x∈x} ∑_{d_p∈D(x,o)} ∏_{d_p∈d_p} (1/d_p) = d ∑_{d_p∈D(x,o)} ∏_{d_p∈d_p} (1/d_p) = 1 ,
so that ∑_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p) = 1/d.
Now, let us relate these considerations on graphs to gradients and use assumptions (H). We remind that path-product ωp is the product ∏ p∈p̃ wp.
Lemma 10. Under assumptions (H), the path-products ωp, ωp′ of two distinct paths p and p′ starting from a same input node x, satisfy:
EW [ωp ωp′ ] = 0 and EW [ ω_p^2 ] = ∏_{p∈p̃} EW [ w_p^2 ] .
Furthermore, if there is at least one non-average-pooling weight on path p, then EW [ωp] = 0.
Proof. Hypothesis H4 yields
EW [ ω_p^2 ] = EW [ ∏_{p∈p̃} w_p^2 ] = ∏_{p∈p̃} EW [ w_p^2 ] .
Now, take two different paths p and p′ that start at a same node x. Starting from x, consider the first node after which p and p′ part and call p and p′ the next nodes on p and p′ respectively. Then the weights wp and wp′ are two weights of a same node. Applying H4 and H5 hence gives
EW [ωp ωp′ ] = EW [ ω_{p\p} ω_{p′\p′} ] EW [ w_p w_{p′} ] = 0 .
Finally, if p has at least one non-average-pooling node p, then successively applying H4 and H3 yields: EW [ωp] = EW [ ωp\p ] EW [wp] = 0.
We now have all elements to prove Theorem 5.
Proof. (of Theorem 5) For a given neuron p in p̃, let p − 1 designate the previous node in p of p. Let σp (resp. σp) be a variable equal to 0 if neuron p gets killed by its ReLU (resp. path p is inactive), and 1 otherwise. Then:
∂xo = ∑_{p∈P(x,o)} ∏_{p∈p̃} (∂_{p−1} p) = ∑_{p∈P(x,o)} ωp σp .
Consequently:
EW,σ [ (∂xo)^2 ] = ∑_{p,p′∈P(x,o)} EW [ωp ωp′ ] Eσ [σp σp′ ]
= ∑_{p∈P(x,o)} ∏_{p∈p̃} EW [ w_p^2 ] Eσ [ σ_p^2 ]   (13)
= ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1/d ,
where the first line uses the independence between the ReLU killings and the weights (H1), the second uses Lemma 10 and the last uses Lemma 9. The gradient ∂xo thus has coordinates whose squared expectations scale like 1/d. Thus each coordinate scales like 1/√d and ‖∂xo‖q like d^{1/q−1/2}. Conclude on ‖∂xL‖q and εp ‖∂xL‖q by using Step 2 of the proof of Theorem 4. Finally, note that, even without the symmetry assumption (S), using Lemma 8 shows that
EW [ ‖∂xo‖_2^2 ] = ∑_{x∈x} EW [ (∂xo)^2 ] = ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (2/d_p) (1/2) = 1 .
Thus, with or without (S), ‖∂xo‖2 is independent of the input-dimension d.
B.4 PROOF OF THEOREM 7
To prove Theorem 7, we will actually prove the following more general theorem, which generalizes Theorem 5. Theorem 7 is a straightforward corollary of it. Theorem 11. Consider any feed-forward network with linear connections and ReLU activation functions that outputs logits fk(x) and satisfies assumptions (H). Suppose that there is a fixed multiset of integers {a1, . . . , an} such that each path from input to output traverses exactly n average pooling nodes with degrees {a1, . . . , an}. Then:
‖∂xfk‖2 ∝ 1 / ∏_{i=1}^{n} √(a_i) . (14)
Furthermore, if the net satisfies the symmetry assumption (S), then: |∂xfk| ∝ 1/√(d ∏_{i=1}^{n} a_i) .
Two remarks. First, in all this proof, "weight" encompasses both the standard random weights, and the constant (deterministic) weights equal to 1/(in-degree) of the average-poolings. Second, assumption H5 implies that the average-pooling nodes have disjoint input nodes: otherwise, there would be two non-zero deterministic weights w, w′ from a same neuron that would hence satisfy: EW [ww′] ≠ 0.
Proof. As previously, let o designate any fixed output-logit fk(x). For any path p, let a be the set of average-pooling nodes of p and let q be the set of remaining nodes. Each path-product ωp satisfies: ωp = ωqωa, where ωa is a same fixed constant. For two distinct paths p,p′, Lemma 10 therefore yields: EW [ ω2p ] = ω2a EW [ ω2q ]
and EW [ωpωp′ ] = 0. Combining this with Lemma 9 and under assumption (S), we get similarly to (13):
EW,σ [ (∂xo)^2 ] = ∑_{p,p′∈P(x,o)} ωa ωa′ EW [ωq ωq′ ] Eσ [σq σq′ ]
= ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i^2) ∏_{q∈q̃} EW [ w_q^2 ] Eσ [ σ_q^2 ]
= ∏_{i=1}^{n} (1/a_i) ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q) (1/2)   (15)
= (1/d) ∏_{i=1}^{n} (1/a_i) ,
where the factor ∏_{i=1}^{n} (1/a_i) in front of the sum has the same value for all p, and the remaining sum equals 1/d because ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q)(1/2) = ∏_{p∈p̃} (1/d_p) and Lemma 9 applies.
Therefore, |∂xo| = |∂xfk| ∝ 1/√(d ∏_{i=1}^{n} a_i). Again, note that, even without assumption (S), using (15) and Lemma 8 shows that
EW [ ‖∂xo‖_2^2 ] = ∑_{x∈x} EW,σ [ (∂xo)^2 ]
= ∑_{x∈x} ∏_{i=1}^{n} (1/a_i) ∑_{p∈P(x,o)} ∏_{i=1}^{n} (1/a_i) ∏_{q∈q̃} (2/d_q) (1/2)   (by (15))
= ∏_{i=1}^{n} (1/a_i) ∑_{x∈x} ∑_{p∈P(x,o)} ∏_{p∈p̃} (1/d_p)
= ∏_{i=1}^{n} (1/a_i) ,
where the last double sum equals 1 by Lemma 8,
which proves (14).
C COMPARISON TO THE CROSS-LIPSCHITZ REGULARIZER
In their Theorem 2.1, Hein & Andriushchenko (2017) show that the minimal ε = ‖δ‖p perturbation to fool the classifier must be bigger than:
min_{k≠c} ( fc(x) − fk(x) ) / ( max_{y∈B(x,ε)} ‖∂xfc(y) − ∂xfk(y)‖q ) . (16)
They argue that the training procedure typically already tries to maximize fc(x)− fk(x), thus one only needs to additionally ensure that ‖∂xfc(x)− ∂xfk(x)‖q is small. They then introduce what they call a Cross-Lipschitz Regularization, which corresponds to the case p = 2 and involves the gradient differences between all classes:
RxLip := (1/K^2) ∑_{k,h=1}^{K} ‖∂xfh(x) − ∂xfk(x)‖_2^2 (17)
In contrast, using (10), (the square of) our proposed regularizer ‖∂xL‖q from (4) can be rewritten, for p = q = 2 as:
R_{‖·‖2}(f) = ∑_{k,h=1}^{K} qk(x) qh(x) ( ∂xfc(x) − ∂xfk(x) ) · ( ∂xfc(x) − ∂xfh(x) ) (18)
Although both (17) and (18) consist in K2 terms, corresponding to the K2 cross-interaction between the K classes, the big difference is that while in (17) all classes play exactly the same role, in (18) the summands all refer to the target class c in at least two different ways. First, all gradient differences are always taken with respect to ∂xfc. Second, each summand is weighted by the probabilities qk(x) and qh(x) of the two involved classes, meaning that only the classes with a non-negligible probability get their gradient regularized. This reflects the idea that only points near the margin need a gradient regularization, which incidentally will make the margin sharper.
D PERCEPTION THRESHOLD
To keep the average pixel-wise variation constant across dimensions d, we saw in (3) that the threshold εp of an ℓp-attack should scale like d^{1/p}. We will now see another justification for this scaling. Contrary to the rest of this work, where we use a fixed εp for all images x, here we will let εp depend on the ℓ2-norm of x. If, as usual, the dataset is normalized such that the pixels have on average variance 1, both approaches are almost equivalent.
Suppose that given an ℓp-attack norm, we want to choose εp such that the signal-to-noise ratio (SNR) ‖x‖2 / ‖δ‖2 of a perturbation δ with ℓp-norm ≤ εp is never greater than a given SNR threshold 1/ε. For p = 2 this imposes ε2 = ε ‖x‖2. More generally, studying the inclusion of ℓp-balls in ℓ2-balls yields
εp = ε ‖x‖2 d^{1/p−1/2} . (19)
Note that this gives again εp = ε∞ d^{1/p}. This explains how to adjust the threshold with varying ℓp-attack norm.
Now, let us see how to adjust the threshold ε of a given ℓp-norm when the dimension d varies. Suppose that x is a natural image and that decreasing its dimension means either decreasing its resolution or cropping it. Because the statistics of natural images are approximately resolution and scale invariant (Huang, 2000), in either case the average squared value of the image pixels remains unchanged, which implies that ‖x‖2 scales like √d. Pasting this back into (19), we again get:
εp = ε∞ d^{1/p} .
In particular, ε∞ ∝ ε is a dimension-free number, exactly like in (3) of the main part. Now, why did we choose the SNR as our invariant reference quantity and not anything else? One reason is that it corresponds to a physical power ratio between the image and the perturbation, which we think the human eye is sensible to. Of course, the eye's sensitivity also depends on the spectral frequency of the signals involved, but we are only interested in orders of magnitude here.
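For concreteness, the calibration can be implemented as a one-line helper (a sketch; the dataset-dependent choice of ε∞ is left to the user).

```python
def calibrated_threshold(eps_inf, d, p):
    """eps_p = eps_inf * d**(1/p): dimension- and norm-adjusted attack budget, Eq. (3)."""
    return eps_inf if p == float("inf") else eps_inf * d ** (1.0 / p)

# e.g. the l2-threshold of Appendix F for 3x32x32 inputs: 0.005 * sqrt(3072)
eps_2 = calibrated_threshold(0.005, 3 * 32 * 32, 2)
```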
Another point: any image x yields an adversarial perturbation δx, where by constraint ‖x‖2 / ‖δx‖2 ≤ 1/ε. For ℓ2-attacks, this inequality is actually an equality. But what about other ℓp-attacks: (on average over x,) how far is the signal-to-noise ratio from its imposed bound 1/ε? For p ∉ {1, 2, ∞}, the answer unfortunately depends on the pixel-statistics of the images. But when p is 1 or ∞, then the situation is locally the same as for p = 2. Specifically:
Lemma 12. Let x be a given input and ε > 0. Let εp be the greatest threshold such that for any δ with ‖δ‖p ≤ εp, the SNR ‖x‖2 / ‖δ‖2 is ≤ 1/ε. Then εp = ε ‖x‖2 d^{1/p−1/2}. Moreover, for p ∈ {1, 2, ∞}, if δx is the εp-sized ℓp-attack that locally maximizes the loss-increase i.e. δx = arg max_{‖δ‖p ≤ εp} |∂xL · δ|, then:
SNR(x) := ‖x‖2 / ‖δx‖2 = 1/ε and Ex [SNR(x)] = 1/ε .
Proof. The first paragraph follows from the fact that the greatest ℓp-ball included in an ℓ2-ball of radius ε ‖x‖2 has radius ε ‖x‖2 d^{1/p−1/2}.
The second paragraph is clear for p = 2. For p = ∞, it follows from the fact that δx = ε∞ sign(∂xL), which satisfies ‖δx‖2 = ε∞ √d = ε ‖x‖2. For p = 1, it is because the maximizing δx is determined by max_{i=1..d} |(∂xL)_i| and satisfies ‖δx‖2 = ε ‖x‖2.
Intuitively, this means that for p ∈ {1, 2, ∞}, the SNR of εp-sized ℓp-attacks on any input x will be exactly equal to its fixed limit 1/ε. And in particular, the mean SNR over samples x is the same (1/ε) in all three cases.
E VULNERABILITY-DIMENSION DEPENDENCE USING DOWNSIZED IMAGENET IMAGES
We also ran a similar experiment as in Section 4.2, but instead of using upsampled CIFAR-10 images, we created a 12-class dataset of approximately 80,000 3 × 256 × 256-sized RGB images by merging similar ImageNet-classes, resizing the smallest image-edge to 256 pixels and
center-cropping the result. We then downsized the images to 32, 64, 128 and 256 pixels per edge, and trained, not 1, but 10 CNNs per image-size. We then computed their adversarial vulnerability and average ‖∂xL‖1. This gave us 2 values per trained net, i.e. 2 x 10 values per image-size, which are shown in Figure 6. The lines follow their medians, the errorbars show their 10th and 90th quantiles. The conclusions are identical to Section 4.2: after usual training, the vulnerability and gradient-norms still increase like √d.
Note that, as the gradients get much larger at higher dimensions, the first order approximation in (2) becomes less and less valid, which explains the little inflection of the adversarial vulnerability curve. For smaller ε-thresholds, we verified that the inflection disappears.
F FIGURES WITH AN ℓ2 PERTURBATION-THRESHOLD AND DEEP-FOOL ATTACKS
Here we plot the same curves as in the main part, but using an ℓ2-attack threshold of size ε2 = 0.005 √d instead of the ℓ∞-threshold, and DeepFool attacks (Moosavi-Dezfooli et al., 2016) instead of iterative ℓ∞-ones in Figs. 8 and 9. Note that contrary to ℓ∞-thresholds, ℓ2-thresholds must be rescaled by √d to stay consistent across dimensions (see Eq. 3 and Appendix D). All curves look essentially the same as their counterparts in the main text.
G A VARIANT OF ADVERSARIALLY-AUGMENTED TRAINING
In usual adversarially-augmented training, the adversarial image x+ δ is generated on the fly, but is nevertheless treated as a fixed input of the neural net, which means that the gradient does not get backpropagated through δ. This need not be. As δ is itself a function of x, the gradients could actually also be backpropagated through δ. As it was only a one-line change of our code, we used this opportunity to test this variant of adversarial training (FGSM-variant in Figure 2) and thank Martín Arjovsky for suggesting it. But except for an increased computation time, we found no significant difference compared to usual augmented training. | 1. What is the focus of the paper regarding neural network vulnerability?
2. What are the strengths and limitations of the theoretical analysis provided in the paper?
3. How do the experimental results support or contradict the theoretical findings?
4. Are there any concerns about the scope or applicability of the research?
5. Do you have any suggestions for future work related to this topic? | Review | Review
This paper analyzes the relationship between "adversarial vulnerability" with input dimensionality of neural network. The paper proves that, under certain assumptions, as the input dimensionality increases, neural networks exhibit increasingly large gradients thus are more adversarially vulnerable. Experiments were done on neural networks trained by penalizing input gradients and FGSM-adversarial training. Similar trends on vulnerability vs dimensionality are found.
The paper is clearly written and easy to follow. I appreciate that the authors also clearly stated the limitation of the theoretical analysis.
The theoretical analyses on vulnerability and dimensionality are novel and provide some insights. But it is unlikely that such analysis is significant. There are a few reasons:
- This analysis only seems to work for "well-behaved" models. For models with gradient masking, obfuscated gradients or even non-differentiable models, it is not clear that how this will apply. (and I appreciate that the authors also acknowledge this in the paper.) It is unclear how this specific gradient based analysis can help the understanding of the adversarial perturbation phenomena. After all, the first order Taylor expansion argument on top of randomly initialized weights is oversimplifying the complicated problem.
- One very important special case of the point above: the analysis probably cannot cover the adversarially PGD trained models [MMS+17] and the certifiably robust ones. Such models may have small gradients inside the box constraint, but can have large gradients between different classes.
On the empirical results, the authors made a few interesting observations, for example the close correspondence between "Adv Train" and "Grad Regu" models.
My concern is that the experiments were done on a narrow range of models, which only have "weak" adversarial training / defenses.
Adversarial robustness is hard to achieve. What matters the most is "why the strongest model is still not robust?" not "why some weak models are not robust?"
It is especially worrisome to me that the paper does not cover the adversarially-augmented training based iterative attacks, e.g. PGD TRAINED models [MMS+17] which is the SOTA on MNIST/CIFAR10 L_\infty robustness benchmark.
Without comprehensive analyses on SOTA robust models, it is hard to justify the validity of the theoretical analysis in this paper, and the conclusions made by the paper.
For example, re: the last sentence in the conclusion: "They hence suggest to tackle adversarial vulnerability by designing new architectures (or new architectural building blocks) rather than by new regularization techniques." The reasoning is not obvious to me given the current evidence shown in the paper.
[MMS+17] Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 |
ICLR | Title
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
Abstract
Large Mixture of Experts (MoE) models can achieve state-of-the-art quality on various language tasks, including machine translation, thanks to efficient model scaling with expert parallelism (Fedus et al., 2021). However, this has brought a fundamental issue of larger memory consumption at deployment time. Furthermore, it results in significant inference speed degradation at auto-regressive decoding steps due to the increased memory transfers. In this paper, we propose Mixture of Quantized Experts (MoQE), a simple weight-only quantization method that applies ultra low-bit quantization, down to 2-bit, only to expert weights, to mitigate the increased memory and latency issues of MoE models. We show that low-bit quantization together with the MoE architecture delivers reliable model performance while reducing the memory size significantly, even without any additional training. In particular, expert layers in MoE models are much more robust to quantization than conventional feedforward network (FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit and 80% sparse expert weights can deliver better model performance than the dense model trained on the same dataset. We present how quantization of different parts of models affects the performance with various experiments using a large MoE model (5.3B). As a result of low-bit quantization, we show the model size can be reduced by 79.6% of the original half precision floating point (fp16) MoE model. This cuts down the model size of 5.3B parameters from 8.4x of the dense model to only 1.7x of the dense model after 2-bit quantization, while still preserving 1.88% higher accuracy than the dense model. Combined with an optimized GPU runtime implementation, it also achieves a 2.7x speed-up, which is even slightly faster than the FLOPs-equivalent dense model.
1 INTRODUCTION
Large Language Models (LLMs) have shown their effectiveness on various language tasks by increasing the number of trainable parameters together with the framework of pre-training a model on a large scale data and using it to different downstream tasks (Devlin et al., 2018; Radford et al., 2018; Liu et al., 2019; Raffel et al., 2020). With the advancement of distributed large scale training methods (Shazeer et al., 2018; Rasley et al., 2020; Ren et al., 2021; Baines et al., 2021) and large scale data collection (Raffel et al., 2020; Hoffmann et al., 2022), the models get even larger and break state-of-the-art performance with the increased model capacity (Brown et al., 2020; Rae et al., 2021; Zoph et al., 2022; Zhang et al., 2022; Smith et al., 2022; Chowdhery et al., 2022). However, the cost of training these models increases whenever more parameters are added, and this may not be sustainable.
As a solution to address this issue, sparsely activated models (Shazeer et al., 2017) are more widely adopted and show significant efficiency improvements in terms of model size scaling while enabling up to trillions of parameters to be trained more efficiently and achieving better model accuracy (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). Mixture-of-Experts (MoE) models are one type of sparsely activated models replacing a single layer in a model with a group of parallel layers which are called experts combined with a gate layer. For a given input, the gate layer selects a subset of the experts from the group, and use them for processing
the input. By limiting the number of selected experts for a given input to one or two, the theoretical FLOPs stays almost constant even if we add hundreds of parallel layers into the MoE group. Thus far, most studies have shown that it is effective to increase the capacity of the models by replacing the feedforward networks (FFN) of Transformer (Vaswani et al., 2017) blocks with an MoE layer consisting of multiple FFN layers together with a gating network (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). One of the most unique and critical components of MoE models is the gating network, which decides how to conditionally select experts for each input, and there have been various studies to improve it to achieve better training convergence (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2021; Clark et al., 2022; Liu et al., 2022; Zhou et al., 2022); they are well surveyed in Fedus et al. (2022).
In spite of the progress on the training of MoE models, there have been only a handful of studies related to MoE model inference. Rajbhandari et al. (2022) design a more efficient MoE architecture and distributed runtime to achieve 7.3X inference speed-up. Kudugunta et al. (2021) use task specific information to reduce the size of the model at deployment time by only loading task specific experts. Kim et al. (2021) prune some experts at deployment time to reduce the model size by trading off model performance. Zoph et al. (2022) use knowledge distillation to distill a large MoE model into a smaller dense model to reduce the memory consumption and improve the throughput. Even with all the proposed techniques, there has not been a solution that accelerates the inference of MoE models while maintaining the accuracy.
Quantization is a model acceleration and compression technique that approximates floating point numbers with lower-precision representations. There are various studies showing that quantization is effective for accelerating neural network model inference (Rodriguez et al., 2018; Stock et al., 2019; Choukroun et al., 2019; Gholami et al., 2022). In particular, it has been known to be very effective in natural language generation tasks such as machine translation (Kim et al., 2019; Aji & Heafield, 2020; Fan et al., 2021) and natural language understanding tasks (Kim & Awadalla, 2020). However, there has not been an in-depth study of how quantization works with large MoE models.
Recently, Dettmers et al. (2022); Yao et al. (2022) have studied how quantization works on large scale language models. Dettmers et al. (2022) look at outlier features in the activations of large language models, and propose to decompose them while performing matrix multiplications. In our quantization method, this is not needed because we use weight-only quantization, so outliers in activations cannot affect the performance. The weights are dequantized back to fp16 when matrix multiplication is performed, which also means our approach does not require special low-bit instructions. We further show that this can be applied to fewer bits than 8-bit for large MoE models. ZeroQuant (Yao et al., 2022) presents a series of techniques including knowledge distillation (Kim & Rush, 2016) for achieving higher quality quantization. Our focus is to exploit the intrinsic characteristics of MoE layers based on our investigation, and we show that a simple quantization algorithm can achieve significantly higher efficiency while maintaining quality.
Our contributions in this paper are as below.
• We present extensive studies about how applying low-bit (down to 2-bits) quantization to different layers of MoE transformer models affects the model accuracy together with comparisons to the corresponding dense model with the same embedding size.
• We show that expert weights are highly robust to quantization: they can be quantized to 3-bit without additional training or calibration data, and to 2-bit with Quantization Aware Training (QAT), which results in a 79.6% reduction in memory size. Combined with a runtime optimization, we show that the method boosts inference speed by more than 2.7X. We leverage the memory bounded characteristic of auto-regressive decoders, so the reduced memory bottleneck improves the overall efficiency even with the additional dequantization steps in our procedure. Based on these observations, we propose a new framework named Mixture of Quantized Experts (MoQE), a simple weight-only quantization method applied only to MoE expert weights.
• Finally, we show that 2-bit quantization causes an emerging sparsity of more than 80% zero values in the expert weights. The expert weight matrices are sparse and very low-precision at the same time, while still outperforming the dense counterpart trained on the same dataset.
2 BACKGROUND - CHALLENGES OF DEPLOYING MOE MODELS
In the widely used MoE architecture, even with a constant or only sub-linearly higher theoretical FLOPs by using top-1 or top-2 gating, the increased model size with additional experts has a serious negative impact on the inference performance in various aspects.
2.1 INCREASED MEMORY FOOTPRINT
First of all, due to the increased model size, the model requires much more accelerator memory. With modern accelerators like GPUs, the accelerator memory size is limited, so more accelerators are required to serve one model, which causes the communication problem described next. Also, because the model takes up more memory, the batch size is limited to be small, which prevents optimal utilization of the processing cores.
2.2 SLOWER INFERENCE SPEED
Increased communication overhead. In the distributed training and inference set-up for large scale models, it is natural to use many GPUs or accelerators for a single model. The model weights can be distributed across different accelerators with various techniques (Ren et al., 2021) and expert parallelism (Fedus et al., 2021). However, Liu et al. (2022) show that the communication overhead from expert parallelism at training time can take up more than half of the entire end-to-end time, depending on the number of GPUs and clusters. This can affect inference efficiency even more severely, because inference usually needs fewer FLOPs than training, so the communication bottleneck stands out more. Therefore, it is desirable to use as few accelerators as possible to avoid this overhead.
Memory bandwidth bottleneck with MoE layers. The increase in the model size not only causes communication overhead, but also brings a significant inference speed impact on the modern processor architectures. While performing beam search decoding, the size of activation (an individual token) is relatively small and the decoding operation is memory bandwidth bounded. This means transferring model weight matrices in a memory hierarchy is a huge bottleneck. With the increased number of experts, the burden of memory transfer increases even more, and directly impacts the inference speed.
Inference speed measurement. Table 1 shows an actual speed difference measured with dense and MoE models on an NVIDIA’s V100 GPU. Two models are encoder and decoder based on the transformer architecture (Vaswani et al., 2017), and have exactly the same model settings except for the number of experts. The speed measurements are done on the translation task from German to English using auto-regressive beam search with beam size of five. Both models are evaluated on the same PyTorch 1 with half-precision floating point (fp16). The MoE model uses top-1 gating which assigns only one expert for a given input token which provides the same theoretical FLOPs as the corresponding dense model (with the same embedding size). Due to the excessive memory transfer caused by the increased number of experts, the actual inference speed decreases by 60% of the original dense model’s speed as shown in the table.
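To make the memory-bandwidth argument concrete, here is a back-of-envelope sketch: per-token decoding latency can be lower-bounded by weight bytes divided by memory bandwidth. The dense parameter count and the 900 GB/s bandwidth figure below are illustrative assumptions, not measurements from this paper.

```python
# Rough lower bound on per-token decoding latency for a memory-bandwidth-bound
# decoder: every decoding step must stream the decoder weights from memory.
# All numbers below are illustrative assumptions, not figures from the paper.
BYTES_PER_PARAM_FP16 = 2
BANDWIDTH_BYTES_PER_S = 900e9  # assumed V100 HBM2 peak bandwidth

def min_ms_per_token(num_params: float) -> float:
    """Time to stream all weights once, ignoring compute, caching and routing."""
    return num_params * BYTES_PER_PARAM_FP16 / BANDWIDTH_BYTES_PER_S * 1e3

dense_params = 0.6e9                  # hypothetical dense model size
moe_params = 8.38 * dense_params      # MoE is 8.38x larger (paper's ratio)

print(f"dense lower bound: {min_ms_per_token(dense_params):.2f} ms/token")
print(f"MoE   lower bound: {min_ms_per_token(moe_params):.2f} ms/token")
# Top-1 routing touches only part of the expert weights per token, so the real
# gap is smaller, but the trend matches the slowdown reported in Table 1.
```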
To overcome these challenges, we focus on reducing the model size utilizing quantization. Especially, increased model size and latency are mostly from the expert FFN weights which contribute
1https://github.com/pytorch/pytorch
92.8% of all weights in this specific model setting, so the FFN weights are our main target for the optimization. With the sparsity that emerges in expert weights from low-bit quantization, we also explore a further sparsification opportunity with a simple magnitude pruning technique.
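As a sanity check on the memory savings reported later, a minimal sketch of the size arithmetic, assuming expert FFN weights make up 92.8% of the parameters and only those are quantized from 16-bit to 2-bit (per-channel scaling factors and other overheads are ignored, so it only approximately reproduces the reported 79.6% reduction):

```python
# Back-of-envelope model size after quantizing only the expert FFN weights.
expert_fraction = 0.928        # share of weights in expert FFN layers
fp16_bits, expert_bits = 16, 2

remaining = (1 - expert_fraction) + expert_fraction * expert_bits / fp16_bits
print(f"size vs fp16 MoE: {remaining:.3f}  (reduction {1 - remaining:.1%})")

moe_vs_dense_fp16 = 8.38       # fp16 MoE model is 8.38x the dense model
print(f"quantized MoE vs dense: {moe_vs_dense_fp16 * remaining:.2f}x")
```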
3 QUANTIZATION METHODS FOR MOE LAYERS
There are multiple design choices to quantize model weights. In this section, we analyze the numerical characteristics of different layers in a large MoE model, and describe the decisions we have made to most effectively quantize the MoE layers.
3.1 NUMERICAL DISTRIBUTION OF MODEL WEIGHTS
While quantizing matrices, it is desired not to have outliers, but to have smoothly distributed numerical values. Outliers usually skew the range to be quantized and scaling factors get too large. Figure 1 shows weight distribution box plots of linear layers in the MoE model’s FFN blocks. Following the widely used practice, an MoE layer is in every other layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). Even number layers {0, 2, ...} are expert FFN layers and odd number layers {1, 3, ...} are normal dense FFN layers. First, all of them are centered around zero. However, dense FFN layers have a much larger range than MoE FFN layers. This indicates that dense FFN layers have more outliers than MoE FFN layers. This phenomenon is more prevalent in the second linear layers sometimes reaching down to −8.0 which is shown in Figure 1b. Figure 2 shows example histograms of an expert FFN weight and a dense FFN weight. As can be seen in Figure 2b, the example dense FFN layer suffers from outliers seriously. However, expert FFN weights in Figure 2a show smooth distribution without any major outliers. We observe the similar pattern all across different layers and different experts. In Appendix C, we additionally include the statistics of overall layers. This statistical observation indicates that MoE FFN layers would be well suited for the quantization.
Based on this observation, the FFN weights follow an approximately normal distribution with a mean value near zero. Therefore, we use symmetric quantization without needing to shift the zero point. Even for the dense FFN layers, the means and the standard deviations are around zero except for the outliers, as can be seen in the box plot of Figure 1. Symmetric quantization also has the advantage of mapping many weight values near the center to zero, which can result in sparse model weights.
3.2 QUANTIZATION ALGORITHMS
3.2.1 QUANTIZATION TECHNIQUES
We try two quantization techniques: (i) linear quantization, which maps quantized integer values to the original float range uniformly, and (ii) log-based quantization from Aji & Heafield (2020), which maps the integer and float ranges on a log scale. In both cases, we choose channel-wise quantization over matrix-wise quantization based on the experiment in Appendix A.
Linear quantization with absolute maximum. The first technique is linear quantization which, given a matrix $A$ and $b$ bits, encodes $A$ as follows:

$$s_j = \frac{2 \times \max(|A_{:,j}|)}{2^b - 1}, \qquad Q_{:,j} = \mathrm{int}\!\left(\frac{A_{:,j}}{s_j}\right)$$

where $s$ is the scaling factor, which can be chosen per channel as shown or for the whole tensor. At inference time, the quantized $Q$ is dequantized back to $A'$ with the scaling factor $s$ as follows:

$$A'_{:,j} = Q_{:,j} \times s_j$$
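Below is a minimal NumPy sketch of this channel-wise absolute-maximum linear quantization and its dequantization. It is an illustration of the formulas above, not the authors' implementation; the matrix size and random weights are placeholders.

```python
import numpy as np

def linear_quantize(A: np.ndarray, bits: int):
    """Channel-wise symmetric linear quantization of a weight matrix A.

    Each column j gets its own scale s_j = 2 * max(|A[:, j]|) / (2**bits - 1);
    int(A / s) is implemented as truncation toward zero, as in the formula.
    """
    s = 2.0 * np.abs(A).max(axis=0) / (2 ** bits - 1)  # per-channel scale
    s = np.where(s == 0, 1.0, s)                       # guard all-zero channels
    Q = (A / s).astype(np.int8)                        # truncate toward zero
    return Q, s

def linear_dequantize(Q: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Dequantize back to floating point: A'[:, j] = Q[:, j] * s_j."""
    return Q.astype(np.float32) * s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.05, size=(4096, 1024)).astype(np.float32)
    Q, s = linear_quantize(A, bits=2)
    err = np.abs(linear_dequantize(Q, s) - A).mean()
    print(f"mean abs reconstruction error at 2-bit: {err:.5f}")
```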
Log-scale quantization. The second technique is log-scale quantization, where 1 bit is kept for the sign and $(b-1)$ bits are used to encode the log-scaled values. Given a matrix $A$, the quantization formula is as follows:

$$P = \mathrm{sign}(A), \qquad T = \mathrm{clip}\!\left(\frac{|A|}{s},\, 1,\, 2^{1-2^{b-1}}\right), \qquad Q = \left\lceil \log_2\!\left(\frac{2}{3}\, T\right) \right\rceil$$

where $s$ can be chosen in two ways: either (i) the absolute maximum or (ii) the optimal value that minimizes the mean squared error (MSE) between the quantized and original values, as described in Aji & Heafield (2020). We use the second option, which we observe gives better accuracy after quantization. At inference time, the quantized weight values are dequantized as follows:

$$A' = P \times s \times 2^{Q}$$
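A corresponding NumPy sketch of the log-scale scheme follows; for simplicity it uses the per-channel absolute maximum as the scale s (the MSE-optimal search from Aji & Heafield (2020) is omitted), and the clip bounds are written in (min, max) order as required by np.clip.

```python
import numpy as np

def log_quantize(A: np.ndarray, bits: int, s: np.ndarray):
    """Log-scale quantization following the formulas above.

    1 bit stores the sign; the remaining (bits - 1) bits store a non-positive
    power-of-two exponent Q, so the reconstructed value is sign * s * 2**Q.
    """
    P = np.sign(A)
    # bounds keep T inside the exponent range representable with (bits - 1) bits
    T = np.clip(np.abs(A) / s, 2.0 ** (1 - 2 ** (bits - 1)), 1.0)
    Q = np.ceil(np.log2(2.0 * T / 3.0))   # round to the nearest power of two
    return P, Q

def log_dequantize(P: np.ndarray, Q: np.ndarray, s: np.ndarray) -> np.ndarray:
    """A' = P * s * 2**Q."""
    return P * s * (2.0 ** Q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.05, size=(512, 256)).astype(np.float32)
    s = np.abs(A).max(axis=0)              # simple absolute-maximum scale
    P, Q = log_quantize(A, bits=4, s=s)
    err = np.abs(log_dequantize(P, Q, s) - A).mean()
    print(f"mean abs reconstruction error at 4-bit log scale: {err:.5f}")
```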
Comparison of quantization techniques. Figure 3 shows the comparison between two quantization techniques with low bits applied on expert FFN layers and dense FFN layers. For dense FFN layers, log-scale quantization performs slightly better, but both do not work well on 2-bit resulting in almost zero evaluation scores. For expert FFN layers, both techniques work similarly for 3 and 4 bits, but log-scale quantization loses the accuracy seriously with 2-bit. This is because there are only 4 bins for the integer values to quantize with 2-bit quantization and one of them is zero. Log-scale tries to split values near zero in a more fine-grained way, but this actually hurts the performance compared to having enough zeros with linear quantization. Based on this experiment, we use linear quantization for compressing MoE FFN layers.
3.2.2 ROBUSTNESS OF EXPERT LAYERS TO QUANTIZATION
To better understand how applying quantization on different parts of an MoE model affects the accuracy, we conduct a set of experiments with various quantization bits. We divide an MoE model
into four parts: (i) expert FFNs, (ii) dense FFN layers, (iii) self-attention layers and (iv) crossattention layers. Based on the observation that linear quantization works better with lower bits, we use it for this set of experiments.
Figure 4 shows evaluation BLEU scores when quantizing different parts of the MoE model. We observe that quantizing expert FFN layers to 2-bit does not seriously impact the overall model quality. However, quantizing other parts of the model to 2-bit hurts the output quality significantly. Quantized cross-attention and self-attention blocks can still maintain the quality with 3-bit quantization, but their performance is impacted by 2-bit quantization. On the other hand, dense FFN layers are significantly impacted by lower bit quantization of 2-bit and 3-bit. With 3-bit quantization, the model score drops by 23% of the original score, and 2-bit quantization on dense FFN layers gives an almost zero score. We also include the same study on a dense model in Appendix B; a similar pattern with 2 and 3 bit quantization is observed.
3.3 MIXTURE OF QUANTIZED EXPERTS (MOQE)
Based on the experiments from the previous parts of this section, we propose a very simple, highly effective and accurate quantization recipe for MoE models.
• Apply weight-only quantization while keeping activations in fp16.
• Quantize expert FFN layers only.
• Use channel-wise and symmetric quantization.
• Choose one of two quantization methods depending on the quantization precision (see the sketch after this list):
1. (3-bit or higher): Directly quantize trained MoE models without additional calibration.
2. (2-bit): Fine-tune the model with Quantization Aware Training (QAT), described below.
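A minimal PyTorch-style sketch of this recipe is given below. It assumes expert FFN linear layers can be identified by the substring 'experts' in their module names, which is a naming assumption for this sketch rather than the authors' code, and it stores dequantized ("fake-quantized") weights for simplicity instead of packed low-bit integers.

```python
import torch

def quantize_weight_channelwise(W: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric, channel-wise (per output row here) fake-quantization of W."""
    s = 2.0 * W.abs().amax(dim=1, keepdim=True) / (2 ** bits - 1)
    s = torch.where(s == 0, torch.ones_like(s), s)
    Q = torch.trunc(W / s)          # int(W / s); kept as float for simplicity
    return Q * s                    # dequantized view; store Q and s in practice

@torch.no_grad()
def apply_moqe(model: torch.nn.Module, bits: int = 4) -> None:
    """Quantize only expert FFN weights; leave attention and dense FFN in fp16.

    Assumes expert FFN linear layers contain 'experts' in their module name,
    which is a naming assumption for this sketch.
    """
    for name, module in model.named_modules():
        if "experts" in name and isinstance(module, torch.nn.Linear):
            module.weight.copy_(quantize_weight_channelwise(module.weight, bits))

# usage sketch (hypothetical builder):
# model = build_moe_transformer(...).half()
# apply_moqe(model, bits=4)   # weight-only; activations remain fp16
```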
Sparse experts with quantization aware training. QAT is a well-known method used to recover the accuracy loss from quantization (Gholami et al., 2022). In our case, to quantize to 2-bit precision, we continue training the model with the original training data while applying quantization only in the forward pass computation, as presented in Wu et al. (2020); Bengio et al. (2013), to recover the accuracy loss. As we use symmetric quantization with 2-bit, the numerical value zero is always included. Due to the normal distribution of expert weights centered around zero, many weight values naturally turn into zeros. This procedure results in sparse expert matrices.
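Below is a minimal sketch of one common way to implement such forward-pass-only quantization with a straight-through estimator in PyTorch; the module and hyper-parameters are illustrative assumptions, not the paper's exact setup.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Symmetric low-bit fake-quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, w: torch.Tensor, bits: int = 2) -> torch.Tensor:
        s = 2.0 * w.abs().amax(dim=1, keepdim=True) / (2 ** bits - 1)
        s = torch.where(s == 0, torch.ones_like(s), s)
        return torch.trunc(w / s) * s          # quantize only in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None               # pass gradients straight through

class QuantizedExpertLinear(torch.nn.Linear):
    """Expert linear layer whose weight is 2-bit quantized on the forward pass."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_q = STEQuantize.apply(self.weight, 2)
        return torch.nn.functional.linear(x, w_q, self.bias)
```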
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Task. We use multilingual machine translation for our experiments, with two different datasets covering 20 and 10 language directions, respectively. We also evaluate the proposed method on a different task presented in Appendix D. We use sacrebleu 2 on the detokenized output to measure the accuracy of the models. A single NVIDIA PCIE V100 running inside a docker container with Ubuntu 20.04 and CUDA 11.6 is used for all experiments, and all code is compiled with nvcc and gcc/g++ 9.3. We measure end-to-end runtime of the inference on the evaluation dataset.
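For reference, a minimal example of computing detokenized BLEU with sacrebleu's Python API; the sentences below are placeholders.

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]        # detokenized system outputs
references = [["the cat sat on the mat"]]      # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```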
Datasets. We use two different datasets described below. For the larger dataset setting, we use an internally collected dataset consisting of 6 different languages: German (de), French (fr), Italian (it), Spanish (es), Dutch (nl) and English (en). They are crawled from the web, and each language pair has at least several hundred million sentences. We use a 128,000 sub-word vocabulary built with the sentencepiece3 library. The number of training sentences is included in Appendix G. For the smaller dataset setting, we use the WMT-10 benchmark dataset widely used for public benchmarks (Wang et al., 2020; Kim et al., 2021). There are 32.5 million sentence pairs for 20 English-centric language pairs including French (fr), Czech (cs), German (de), Finnish (fi), Latvian (lt), Estonian (et), Romanian (ro), Hindi (hi), Turkish (tr) and Gujarati (gu).
Model architecture. For all the experiments with large dataset, we use 24 transformer (Vaswani et al., 2017) encoder layers and 12 transformer decoder layers following the deeper encoder and shallower decoder practice (Kim et al., 2019; Kasai et al., 2021) to be more efficient at auto-regressive decoding. The embedding dimension is 1,024 and FFN hidden dimension is 4,096. For the positional information encoding to the hidden state, we use Transformer with Untied Positional Encoding (TUPE) proposed in Ke et al. (2021) instead of the conventional sinusoidal positional embedding. Another design choice is the location of layer normalization. For the training stability, we use pre-layer normalization proposed in Xiong et al. (2020) instead of the original post-layer normalization from (Vaswani et al., 2017). We train MoE and dense models for the comparison. The model architecture choices mentioned here are common for both models. The only difference between dense and MoE models is the number of experts. We use 32 experts for the MoE model trained with the larger web data. We use beam search decoding with beam size of 5. For the experiments with smaller dataset, we use 12 transformer encoder layers and 6 transformer decoder layers. The embedding dimension is 768 and FFN hidden dimension is 3,072. In this setting, we use MoE layers with 128 experts at every other layer.
MoE architecture. For the MoE model specific settings, we use top-1 learned gating from Fedus et al. (2021) and use an MoE layer at every other layer which are even numbered layers (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). During the training of MoE models, we use jittering noise and balancing loss (ratio of 0.01) suggested in Lepikhin et al. (2020); Fedus et al. (2021) to more uniformly distribute expert utilization. To prevent overfitting and better regularize the model, we use gating dropout (0.2) (Liu et al., 2022) as well.
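To make the routing structure concrete, a minimal top-1 gated MoE FFN layer is sketched below; it omits jittering noise, the balancing loss, gating dropout and expert parallelism, so it illustrates only the conditional expert selection described here.

```python
import torch
import torch.nn as nn

class Top1MoEFFN(nn.Module):
    """Minimal top-1 gated mixture-of-experts FFN (no balancing loss, no jitter)."""

    def __init__(self, d_model: int, d_ffn: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Linear(d_ffn, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.shape[-1])            # (num_tokens, d_model)
        probs = torch.softmax(self.gate(tokens), dim=-1)
        top_prob, top_idx = probs.max(dim=-1)          # top-1 routing per token
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():                             # run each expert on its tokens
                out[mask] = top_prob[mask].unsqueeze(1) * expert(tokens[mask])
        return out.reshape(x.shape)
```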
4.2 MOQE PERFORMANCE RESULTS
We apply MoQE quantization recipe to an MoE model and compare the performance with several dense models in Table 2. This experiment is done on the larger web dataset. The baseline is a dense model trained on the same dataset as the MoE model. Throughput, memory size and sparsity are all measured with the fp16 precision model. As additional comparison points, the dense model is also
2https://github.com/mjpost/sacrebleu 3https://github.com/google/sentencepiece
quantized to 8-bit and 4-bit only on the even numbered FFN layers which is the best configuration for quantizing the dense model described in Appendix B. For the MoE model, various quantization settings ranging from 8-bit to 2-bit are measured together with the original fp16 performance. For 2- bit quantization, additional QAT is applied. Finally, we applied magnitude based pruning approach to the 2-bit quantized model to acquire a sparser model.
First of all, the MoE model achieves a 2.87% improvement in BLEU score while increasing the model size to 8.38X of the original dense model. When 4-bit post-training quantization is applied, it still maintains a 2.11% higher BLEU score than the original dense model, and it achieves even faster speed than the dense model, a 2.7X speed-up over the fp16 MoE model. This also reduces the memory consumption significantly, from 8.38X to 2.67X of the dense model. With 2-bit QAT, the MoE model can still maintain 1.88% higher quality than the original dense model, but the model size is now only 1.71X of the original dense model. Also, the matrices are sparse, with up to 79.1% of the values being zeros.
Figure 5 shows the sparsity distribution of different layers. The second linear layers after the nonlinear activation layers show higher sparsity compared to the first linear layers. Some layers could reach up to 85% sparsity. We include a further investigation of sparsity with magnitude based pruning approach in Appendix I.
4.3 ROBUSTNESS COMPARISON BETWEEN MOE AND DENSE MODELS
We compare robustness against low-bit quantization between MoE and dense models using the posttraining quantization without any QAT. For the dense model, quantization with different bits is applied to the even numbered FFN layers. Appendix B shows this is the best layer selection for the dense model. We use two different datasets to verify the proposed quantization method works in different model settings.
Figure 6 presents the experiment with the model trained on the larger dataset. It shows the average BLEU scores at different quantization precisions for both MoE and dense models. The MoE model can maintain accuracy within -0.3 down to 3-bit and within -1.82 at 2-bit. On the other hand, the dense model can preserve the accuracy only down to 4-bit, and starts to lose significant accuracy, more than 2 BLEU points, when it goes down to 3-bit. In the case of 2-bit, the dense model loses most of its capability, dropping by 42.96 BLEU points. Table 9 shows the score differences from quantization for both MoE and dense models on 10 different language pair translations.
Figure 7 presents the experiment with the model trained with the smaller dataset. In this setting, each individual expert is smaller, but there are 4 times more experts in one MoE layer. And, they are trained with smaller dataset, so they do not have equivalent knowledge as the previous model trained on the larger dataset. As can be seen in the Figure, the quantization performance shows a similar pattern. The MoE model preserves accuracy even when it is quantized to 2 or 3 bits. However, dense model quickly loses the performance when it is quantized down to lower than 4-bit. Again, the MoE model is much more robust to quantization than the dense model.
5 CONCLUSION AND FUTURE WORKS
This paper shows, through various experiments, how robust MoE models are to low-bit quantization. By analyzing component-wise sensitivity and various quantization design choices, we present an efficient and effective way to reduce the model size, resulting in a 4.9X model size reduction. With an optimized runtime, the 4-bit quantized model can run 2.71X faster than the fp16 model. We also show 2-bit quantization can achieve more than 79% sparsity in the expert weights. The results naturally suggest interesting future research directions. The discovered robustness of expert layers can guide a better way to train MoE models. If we can better control the splitting of the latent space, better MoE models can be acquired. Analyzing the interactions between expert FFN layers and the other common layers in the model could guide a way to build a composable model. In particular, as presented in Appendix E, we observe that quantization sometimes improves the accuracy on tasks in a specific situation. Another important direction will be studying how to accelerate sparse expert computation on modern hardware with software/hardware co-design. This will eventually make MoE models much more efficient in both training and inference.
A CHANNEL-WISE VS MATRIX-WISE QUANTIZATION
Scaling factors are calculated by the quantization algorithm and stored as half-precision floating-point (fp16) numbers used to dequantize the matrices. These factors can be chosen at the channel scale or the whole-matrix scale. As shown in Figure 8, channel-wise quantization gives considerably higher scores than tensor-wise quantization, especially at low precision. The additional parameters needed to store channel-wise scaling factors are small, because only one value is needed per channel, which amounts to less than 1% of the total parameters in a matrix. Therefore, we use channel-wise quantization for all the quantization experiments.
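A minimal sketch contrasting the two scale granularities on a random matrix with a single injected outlier; it reports reconstruction error rather than BLEU, as a stand-in for the comparison in Figure 8.

```python
import numpy as np

def fake_quantize(A: np.ndarray, bits: int, per_channel: bool) -> np.ndarray:
    """Symmetric linear quantization with a per-channel or per-tensor scale."""
    amax = np.abs(A).max(axis=0, keepdims=True) if per_channel else np.abs(A).max()
    s = 2.0 * amax / (2 ** bits - 1)
    return (A / s).astype(np.int8) * s      # quantize, then dequantize

rng = np.random.default_rng(0)
A = rng.normal(scale=0.05, size=(4096, 1024))
A[0, 0] = 1.0                               # one outlier skews the per-tensor scale
for per_channel in (True, False):
    err = np.abs(fake_quantize(A, bits=3, per_channel=per_channel) - A).mean()
    print(f"per_channel={per_channel}: mean abs error {err:.5f}")
```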
B QUANTIZATION OF DIFFERENT LAYERS IN A DENSE MODEL
In the paper, we compare a dense model and an MoE model in terms of quantization robustness. To make a fair comparison, we consider quantizing only half of the dense transformer blocks’ FFNs, because we quantize expert weights only which exist only in every other block (even numbered). We compare three different configurations - (1) quantizing even numbered blocks’ FFNs only, (2) quantizing odd numbered blocks’ FFNs only and (3) quantizing all FFN layers. As can be seen in Figure B, quantizing even numbered blocks’ FFNs affects the accuracy the least, and quantizing all FFN layers give the worst result. Based on this experiment, we quantize only even numbered transformer blocks’ FFNs for the dense model in all the experiments and comparisons.
C SKEWNESS OF WEIGHT MATRICES IN MOE AND DENSE MODELS
In the analysis of model weight distribution in Section 3, we observe that dense models’ FFN layers tend to have more outliers than MoEs’ expert FFN layers. We measure the skewness of weight distribution of those in Table 3.
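Such skewness statistics can be computed, for example, with SciPy; the sketch below uses random data with injected outliers as a stand-in for actual checkpoint weights.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(0)
expert_like = rng.normal(scale=0.05, size=100_000)           # smooth, no outliers
dense_like = np.concatenate([expert_like, [-8.0, -6.5]])      # a few negative outliers

for name, w in [("expert-like", expert_like), ("dense-like", dense_like)]:
    print(f"{name}: skewness={skew(w):+.3f}, excess kurtosis={kurtosis(w):+.3f}")
```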
D ABSTRACTIVE SUMMARIZATION TASK PERFORMANCE
To validate the quantization performs well on a different task and a model, we evaluate a 10.1 B MoE (64 experts) model’s quantization performance on an abstractive summarization task called XSUM (Narayan et al., 2018). Table 4 shows that the MoE model performs well with low-bit quantizations such as 2-bits and 3-bits.
E BETTER GENERALIZATION WITH EXPERT QUANTIZATION
We observe an interesting phenomenon in which quantization actually improves the score when evaluating on a dataset from a different domain. We trained an MoE model with 64 experts (10.1B) on translations between 50 different languages (98 English-centric language pairs). When we evaluate this model on a different-domain subset of 6 languages (German, Spanish, French, Italian, Dutch, English), the evaluation BLEU score increases until we quantize the experts down to 3-bits, without any additional QAT or calibration. With 3-bit quantization, the score increases by more than 6.42% on non-English to English and 6.96% on English to the others. In particular, the English to Italian and Italian to English scores increase by more than 10%, which is quite significant. The results are summarized in Table 5. We are still analyzing the reason for this phenomenon, but we think it is related to how MoE models learn representations. MoE layers might learn very specific knowledge with their increased capacity, while the shared layers learn more generic knowledge. By blurring the representation from the MoE layers, the model becomes more capable on general tasks. This is one of our future research areas.
F ADDITIONAL SPARSITY DISTRIBUTION
We include sparsity distribution across different layers in a dense model quantized with 4-bit. As can be seen in Figure 10, the sparsity is overall low. Second linear layers in FFN show slightly higher sparsity, but all of them are smaller than 30%.
G MACHINE TRANSLATION DATASET SUMMARY
Table 6 shows the number of parallel sentences used to train dense and MoE models. All languages have at least 300 million sentences and the differences in the number among languages are less than two times.
H QUANTIZATION AWARE TRAINING (QAT)
For the QAT with straight through estimator, we use the hyper-parameters as in Table 7. Figure 11 shows the validation loss curve of one training run with 2-bit expert quantization.
I MAGNITUDE PRUNING EXPERIMENTS
Inspired by the sparsity that emerges in expert layers, we apply simple magnitude-based pruning to the MoE model we experiment with. We apply different threshold values from 0.00001 to 0.5, setting all weight values smaller than the threshold to zero. We apply 2 to 8 bit quantization together. Figure 12 shows how model performance varies with the achieved sparsity. Even at a sparsity level of 90%, the model preserves a certain level of task capability. Compared to Gale et al. (2019), this shows much higher performance at the same sparsity. This is another example showing the robustness of expert weights.
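A minimal sketch of this magnitude-based pruning applied to a single weight tensor; the tensor and threshold values are illustrative.

```python
import torch

def magnitude_prune(w: torch.Tensor, threshold: float) -> torch.Tensor:
    """Zero out all weights whose magnitude is below the threshold."""
    return torch.where(w.abs() < threshold, torch.zeros_like(w), w)

w = torch.randn(4096, 1024) * 0.05            # placeholder expert weight matrix
for threshold in (1e-5, 0.01, 0.05, 0.1):
    pruned = magnitude_prune(w, threshold)
    sparsity = (pruned == 0).float().mean().item()
    print(f"threshold={threshold}: sparsity={sparsity:.1%}")
```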
J DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON PUBLIC WMT DATASET
Table 8 shows individual BLEU score changes with various quantization bits for MoE and dense models trained on public WMT dataset.
K DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO
Table 9 shows individual BLEU score changes with various quantization bits for MoE and dense models measured with the internal validation dataset. Table 10 shows the same model’s evaluation performance on two WMT public dataset. | 1. What is the focus of the paper regarding MoE model acceleration?
2. What are the strengths and weaknesses of the proposed weight-only quantization method?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the comparisons with other quantization methods for MoE? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a simple weight-only quantization method to quantize the MoE model. This method combines several common quantization techniques to quantize and reduce the model size. The experiment results show that the method can be used for accelerating the MoE model.
Strengths And Weaknesses
Strength: The paper applies quantization to reduce the model size and accelerate model inference. The paper conducts several experiments to show the effectiveness of the method.
Weakness: This paper lacks a comparison with other quantization methods for MoE. The paper needs more valuable analysis of applying quantization to MoE, since using quantization to reduce model size for speedup is a common technique and result.
Clarity, Quality, Novelty And Reproducibility
This paper clearly explains the challenges for MoE deployment and introduces its weight-only technique. The paper needs more valuable analysis beyond the phenomena commonly observed in quantization.
ICLR | Title
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
Abstract
Large Mixture of Experts (MoE) models could achieve state-of-the-art quality on various language tasks, including machine translation task, thanks to the efficient model scaling capability with expert parallelism (Fedus et al., 2021). However, it has brought a fundamental issue of larger memory consumption at deployment time. Furthermore, this results in significant inference speed degradation at autoregressive decoding steps due to the increased memory transfers. In this paper, we propose Mixture of Quantized Experts (MoQE) which is a simple weight-only quantization method applying ultra low-bit down to 2-bit quantizations only to expert weights for mitigating the increased memory and latency issues of MoE models. We show that low-bit quantization together with the MoE architecture delivers a reliable model performance while reducing the memory size significantly even without any additional training. Especially, expert layers in MoE models are much more robust to the quantization than conventional feedforward networks (FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit and 80% sparse expert weights can deliver better model performance than the dense model trained on the same dataset. We present how quantization of different parts of models affects the performance with various experiments using a large MoE model (5.3 B). As a result of low-bit quantization, we show the model size can be reduced by 79.6% of the original half precision floating point (fp16) MoE model. This cuts down the model size of 5.3B parameters from 8.4x of the dense model to only 1.7x of the dense model after 2-bit quantization. It still preserves 1.88% higher accuracy than the dense model. Combined with an optimized GPU runtime implementation, it also achieves 2.7X speed-up which is even slightly faster than the FLOPs equivalent dense model.
1 INTRODUCTION
Large Language Models (LLMs) have shown their effectiveness on various language tasks by increasing the number of trainable parameters together with the framework of pre-training a model on a large scale data and using it to different downstream tasks (Devlin et al., 2018; Radford et al., 2018; Liu et al., 2019; Raffel et al., 2020). With the advancement of distributed large scale training methods (Shazeer et al., 2018; Rasley et al., 2020; Ren et al., 2021; Baines et al., 2021) and large scale data collection (Raffel et al., 2020; Hoffmann et al., 2022), the models get even larger and break state-of-the-art performance with the increased model capacity (Brown et al., 2020; Rae et al., 2021; Zoph et al., 2022; Zhang et al., 2022; Smith et al., 2022; Chowdhery et al., 2022). However, the cost of training these models increases whenever more parameters are added, and this may not be sustainable.
As a solution to address this issue, sparsely activated models (Shazeer et al., 2017) are more widely adopted and show significant efficiency improvements in terms of model size scaling while enabling up to trillions of parameters to be trained more efficiently and achieving better model accuracy (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). Mixture-of-Experts (MoE) models are one type of sparsely activated models replacing a single layer in a model with a group of parallel layers which are called experts combined with a gate layer. For a given input, the gate layer selects a subset of the experts from the group, and use them for processing
the input. By limiting the number of subset layers for a given input to one or two, the theoretical FLOPs stays almost constant even if we add hundreds of parallel layers into the MoE group. Thus far, most studies have shown that it is effective to increase the capacity of the models by replacing feedforward networks (FFN) of Transformer (Vaswani et al., 2017) blocks with MoE layer consists of multiple FFN layers together with a gating network (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). One of the most unique and critical components of MoE models is the gating network which decides how to conditionally select experts for each input, and there have been various studies to improve it to achieve a better training convergence ((Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2021; Clark et al., 2022; Liu et al., 2022; Zhou et al., 2022) and they are well surveyed in Fedus et al. (2022).
In spite of the progress on the training of MoE models, there have been only a few handfuls of studies related to MoE model inference. Rajbhandari et al. (2022) designs a more efficient MoE architecture and distributed runtime to achieve 7.3X inference speed-up. Kudugunta et al. (2021) uses task specific information to reduce the size of the model at deployment time by only loading task specific experts. Kim et al. (2021) prunes some experts at deployment time to reduce the model size by trading-off model performance. Zoph et al. (2022) uses knowledge distillation technique to distill a large MoE model into a smaller dense model to reduce the memory consumption and improve the throughput. Even with all the proposed techniques, there has not been a solution to accelerate the inference of MoE models while maintaining the accuracy.
Quantization is a type of model acceleration and compression techniques by estimating a floating point number into a smaller precision number. There are various studies that show quantization is effective to accelerate neural network model inference (Rodriguez et al., 2018; Stock et al., 2019; Choukroun et al., 2019; Gholami et al., 2022). Especially, it has been known to be very effective in natural language generation such as machine translation ((Kim et al., 2019; Aji & Heafield, 2020; Fan et al., 2021)) and natural language understanding (Kim & Awadalla, 2020) tasks. However, there has not been an in-depth study about how quantization works with large MoE models.
Recently, Dettmers et al. (2022); Yao et al. (2022) have studied how quantization works on large scale language models. Dettmers et al. (2022) looks at outlier features in the activations of large language models, and proposes to decompose them while performing matrix multiplications. In our quantization method, this is not needed because it is a weight-only quantization and outliers in activations cannot affect the performance. And, the weights are dequantized back to fp16 while matrix multiplication is done. This also makes our approach not require a special low-bit instructions. And, we show that this can be applied to lower bits than 8-bit for large MoE models. ZeroQuant (Yao et al., 2022) presents a series of techniques including knowledge distillation (Kim & Rush, 2016) for achieving a higher quality quantization. Our focus is to exploit the intrinsic characteristics of MoE layers based on our investigation, and we show that a simple quantization algorithm can achieve significantly higher efficiency and maintain the quality at the same time.
Our contributions in this paper are as below.
• We present extensive studies about how applying low-bit (down to 2-bits) quantization to different layers of MoE transformer models affects the model accuracy together with comparisons to the corresponding dense model with the same embedding size.
• We show that expert weights are highly robust to the quantization, therefore they can be quantized to 3-bit without additional training or calibration data and to 2-bit with Quantization Aware Training (QAT) which results in 79.6% reduction in memory size. Combined with a runtime optimization, we show that the method boosts the inference speed significantly more than 2.7X faster. We leverage the memory bounded characteristic of auto-regressive decoders, so reduced memory bottleneck improves the overall efficiency even with additional dequantization steps in our procedure. Based on the observations, we propose a new framework named Mixture of Quantized Experts (MoQE) which is a simple weight-only quantization method only applied to MoE expert weights.
• Finally, we show an emerging sparsity of more than 80% in the expert weights to be zero from 2-bit quantization. The expert weight matrices are sparse and very low-precision at the same time, while still outperforming the dense counterpart trained on the same dataset.
2 BACKGROUND - CHALLENGES OF DEPLOYING MOE MODELS
In the widely used MoE architecture, even with a constant or only sub-linearly higher theoretical FLOPs by using top-1 or top-2 gating, the increased model size with additional experts has a serious negative impact on the inference performance in various aspects.
2.1 INCREASED MEMORY FOOTPRINT
First of all, due to the increased model size, the model requires much more accelerator memory. With modern accelerators like GPUs, the accelerator memory size is limited. So, more accelerators are required to handle 1 model which causes communication problem described next. Also, the model takes up more memory, so the batch size is limited to be small which prevents the optimal utilization of processing cores.
2.2 SLOWER INFERENCE SPEED
Increased communication overhead. In the distributed training and inference set-up for large scale models, it is natural to use many GPUs or accelerators for a single model. The model weights can be distributed across different accelerators with various techniques (Ren et al., 2021) and expert parallelism (Fedus et al., 2021). However, in Liu et al. (2022), it is shown that the communication overhead with expert parallelism at training time could take up to more than half of the entire end-to-end time depending on the number of GPUs and clusters. This could affect inference efficiency even more severely because inference usually needs fewer FLOPs numbers than training, and communication bottleneck will stand out more. Therefore, it is desirable to use as few numbers of accelerators as possible to avoid this overhead.
Memory bandwidth bottleneck with MoE layers. The increase in the model size not only causes communication overhead, but also brings a significant inference speed impact on the modern processor architectures. While performing beam search decoding, the size of activation (an individual token) is relatively small and the decoding operation is memory bandwidth bounded. This means transferring model weight matrices in a memory hierarchy is a huge bottleneck. With the increased number of experts, the burden of memory transfer increases even more, and directly impacts the inference speed.
Inference speed measurement. Table 1 shows an actual speed difference measured with dense and MoE models on an NVIDIA’s V100 GPU. Two models are encoder and decoder based on the transformer architecture (Vaswani et al., 2017), and have exactly the same model settings except for the number of experts. The speed measurements are done on the translation task from German to English using auto-regressive beam search with beam size of five. Both models are evaluated on the same PyTorch 1 with half-precision floating point (fp16). The MoE model uses top-1 gating which assigns only one expert for a given input token which provides the same theoretical FLOPs as the corresponding dense model (with the same embedding size). Due to the excessive memory transfer caused by the increased number of experts, the actual inference speed decreases by 60% of the original dense model’s speed as shown in the table.
To overcome these challenges, we focus on reducing the model size utilizing quantization. Especially, increased model size and latency are mostly from the expert FFN weights which contribute
1https://github.com/pytorch/pytorch
92.8 % of all weights in this specific model setting, so the FFN weights are our main target for the optimization. With an emerged sparsity in expert weights from the low-bit quantization, we also explore a further sparsification opportunity with a simple magnitude pruning technique.
3 QUANTIZATION METHODS FOR MOE LAYERS
There are multiple design choices to quantize model weights. In this section, we analyze the numerical characteristics of different layers in a large MoE model, and describe the decisions we have made to most effectively quantize the MoE layers.
3.1 NUMERICAL DISTRIBUTION OF MODEL WEIGHTS
While quantizing matrices, it is desired not to have outliers, but to have smoothly distributed numerical values. Outliers usually skew the range to be quantized and scaling factors get too large. Figure 1 shows weight distribution box plots of linear layers in the MoE model’s FFN blocks. Following the widely used practice, an MoE layer is in every other layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). Even number layers {0, 2, ...} are expert FFN layers and odd number layers {1, 3, ...} are normal dense FFN layers. First, all of them are centered around zero. However, dense FFN layers have a much larger range than MoE FFN layers. This indicates that dense FFN layers have more outliers than MoE FFN layers. This phenomenon is more prevalent in the second linear layers sometimes reaching down to −8.0 which is shown in Figure 1b. Figure 2 shows example histograms of an expert FFN weight and a dense FFN weight. As can be seen in Figure 2b, the example dense FFN layer suffers from outliers seriously. However, expert FFN weights in Figure 2a show smooth distribution without any major outliers. We observe the similar pattern all across different layers and different experts. In Appendix C, we additionally include the statistics of overall layers. This statistical observation indicates that MoE FFN layers would be well suited for the quantization.
Based on the observation, the FFN weights are following a normal distribution with a mean value near zero. Therefore, we use symmetric quantization without needing to shift the zero point. Even for the dense FFN layers, the means and the standard deviations are around zero except for the outliers which can be seen in the box plot of Figure 1. This symmetric quantization also gives an advantage to quantize many weight values near center to zero which could result in a sparse model weight.
3.2 QUANTIZATION ALGORITHMS
3.2.1 QUANTIZATION TECHNIQUES
We try two quantization techniques, they are (i) linear quantization which is mapping quantized integer values and the original float value uniformly and (ii) log-based quantization from Aji & Heafield (2020) which maps integer and float ranges in a log scale. In both cases, we choose channelwise quantization over matrix-wise quantization based on the experiment in Appendix A.
Linear quantization with absolute maximum. The first technique is linear quantization which, given a matrix A and b bits, it encodes A as follows:

$$s_j = \frac{2 \times \max(|A_{:,j}|)}{2^b - 1}, \qquad Q_{:,j} = \mathrm{int}\!\left(\frac{A_{:,j}}{s_j}\right)$$

where s is the scaling factor which can be chosen per channel as shown or per the whole tensor. At inference time, the quantized Q is dequantized back to A′ with the scaling factor s as follows:

$$A'_{:,j} = Q_{:,j} \times s_j$$
Log-scale quantization. The second technique is log-scale quantization where 1 bit is kept for the sign and (b−1) bits are used to encode the log-scaled values. Given a matrix A, the quantization formula is as follows:

$$P = \mathrm{sign}(A), \qquad T = \mathrm{clip}\!\left(\frac{|A|}{s},\, 1,\, 2^{1-2^{b-1}}\right), \qquad Q = \left\lceil \log_2\!\left(\frac{2}{3}\, T\right) \right\rceil$$

where s can be chosen in two ways, either (i) the absolute maximum or (ii) the optimal value to minimize the mean squared error (MSE) between the quantized and original values which is described in Aji & Heafield (2020). We use the second algorithm which we observe a better accuracy with the quantization. At inference time, the quantized weight values are dequantized based on the formula as follows:

$$A' = P \times s \times 2^{Q}$$
Comparison of quantization techniques. Figure 3 shows the comparison between two quantization techniques with low bits applied on expert FFN layers and dense FFN layers. For dense FFN layers, log-scale quantization performs slightly better, but both do not work well on 2-bit resulting in almost zero evaluation scores. For expert FFN layers, both techniques work similarly for 3 and 4 bits, but log-scale quantization loses the accuracy seriously with 2-bit. This is because there are only 4 bins for the integer values to quantize with 2-bit quantization and one of them is zero. Log-scale tries to split values near zero in a more fine-grained way, but this actually hurts the performance compared to having enough zeros with linear quantization. Based on this experiment, we use linear quantization for compressing MoE FFN layers.
3.2.2 ROBUSTNESS OF EXPERT LAYERS TO QUANTIZATION
To better understand how applying quantization on different parts of an MoE model affects the accuracy, we conduct a set of experiments with various quantization bits. We divide an MoE model
into four parts: (i) expert FFNs, (ii) dense FFN layers, (iii) self-attention layers and (iv) crossattention layers. Based on the observation that linear quantization works better with lower bits, we use it for this set of experiments.
Figure 4 shows evaluation BLEU scores when quantizing different parts of the MoE model. We observe that quantizing expert FFN layers to 2-bit does not seriously impact the overall model quality. However, quantizing other parts of the model into 2-bit hurts the output quality significantly. Quantized cross-attention and self-attention blocks still can maintain the quality with 3-bit quantization, but their performance gets impacted with 2-bit quantization. On the other hand, dense FFN layers get significant impact with lower bit quantization of 2-bit and 3-bit. With 3-bit quantization, the model score drops 23 % of original score, and 2-bit quantization on dense FFN layers gives almost zero score. We also include the same study on a dense model in Appendix B, the similar pattern with 2 and 3 bit quantization is observed.
3.3 MIXTURE OF QUANTIZED EXPERTS (MOQE)
Based on the experiments from the previous parts of this section, we propose a very simple, highly effective and accurate quantization recipe for MoE models.
• Apply weight-only quantization while keeping activations in fp16.
• Quantize expert FFN layers only.
• Use channel-wise and symmetric quantization.
• Choose one of two quantization methods depending on the quantization precision:
1. (3-bit or higher): Directly quantize trained MoE models without additional calibration.
2. (2-bit): Fine-tune the model with Quantization Aware Training (QAT), described below.
Sparse experts with quantization aware training QAT is a well-known method used to recover the accuracy loss from the quantization (Gholami et al., 2022). In our case, to quantize to 2-bit precision, we can continue training the model with the original training data while applying quantization only on the forward pass computation as presented in (Wu et al., 2020; Bengio
et al., 2013) for recovering the accuracy loss. As we use symmetric quantization with 2-bit, zero numerical value is always included. Due to the normal distribution of expert weights centered around zero, many weight values naturally turn into zeros. This procedure results in sparse expert matrices.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Task. We use multilingual machine translation task for our experiments with two different dataset which are 20 language directions and 10 language directions respectively. We also evaluate the proposed method on a different task presented in Appendix D. We use sacrebleu 2 on the detokenized output to measure the accuracy of the models. A single NVIDIA PCIE V100 running inside a docker container running Ubuntu 20.04 and CUDA 11.6 is used for all experiments, and all code is compiled with nvcc and gcc/g++ 9.3. We measure end-to-end runtime of the inference for the evaluation dataset.
Datasets. We use two different datasets described below. For the larger dataset setting, we use internally collected dataset consists of 6 different languages which are German (de), French (fr), Italian (it), Spanish (es), Dutch (nl) and English (en). They are crawled from web, and each language pair has at least several hundred million sentences. We use 128,000 sub-words vocabulary built with sentencepiece3 library. The number of training sentences is included in Appendix G. For the smaller dataset setting, we use WMT-10 benchmark dataset widely used for public benchmarks (Wang et al., 2020; Kim et al., 2021). There are 32.5 million sentence pairs for English-centric 20 language pairs including French (fr), Czech(cs), German (de), Finnish (fi), Latvian (lt), Estonian (et), Romanian (ro), Hindi (hi), Turkish(tr) and Gujarati (gu).
Model architecture. For all the experiments with large dataset, we use 24 transformer (Vaswani et al., 2017) encoder layers and 12 transformer decoder layers following the deeper encoder and shallower decoder practice (Kim et al., 2019; Kasai et al., 2021) to be more efficient at auto-regressive decoding. The embedding dimension is 1, 024 and FFN hidden dimension is 4, 096. For the positional information encoding to the hidden state, we use Transformer with Untied Positional Encoding (TUPE) proposed in Ke et al. (2021) instead of the conventional sinusoidal positional embedding. Another design choice is the location of layer normalization. For the training stability, we use pre-layer normalization proposed in Xiong et al. (2020) instead of the original post-layer normalization from (Vaswani et al., 2017). We train MoE and dense models for the comparison. The model architecture choices mentioned here are common for both models. The only difference between dense and MoE models is the number of experts. We use 32 experts for the MoE model trained with the larger web data. We use beam search decoding with beam size of 5. For the experiments with smaller dataset, we use 12 transformer encoder layers and 6 transformer decoder layers. The embedding dimension is 768 and FFN hidden dimension is 3, 072. In this setting, we use MoE layers with 128 experts at every other layer.
MoE architecture. For the MoE model specific settings, we use top-1 learned gating from Fedus et al. (2021) and use an MoE layer at every other layer which are even numbered layers (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). During the training of MoE models, we use jittering noise and balancing loss (ratio of 0.01) suggested in Lepikhin et al. (2020); Fedus et al. (2021) to more uniformly distribute expert utilization. To prevent overfitting and better regularize the model, we use gating dropout (0.2) (Liu et al., 2022) as well.
4.2 MOQE PERFORMANCE RESULTS
We apply MoQE quantization recipe to an MoE model and compare the performance with several dense models in Table 2. This experiment is done on the larger web dataset. The baseline is a dense model trained on the same dataset as the MoE model. Throughput, memory size and sparsity are all measured with the fp16 precision model. As additional comparison points, the dense model is also
2https://github.com/mjpost/sacrebleu 3https://github.com/google/sentencepiece
quantized to 8-bit and 4-bit only on the even numbered FFN layers which is the best configuration for quantizing the dense model described in Appendix B. For the MoE model, various quantization settings ranging from 8-bit to 2-bit are measured together with the original fp16 performance. For 2- bit quantization, additional QAT is applied. Finally, we applied magnitude based pruning approach to the 2-bit quantized model to acquire a sparser model.
First of all, the MoE model achieves 2.87% improvement on the BLEU score while increasing the model size to 8.38X of the original dense model. When 4-bit post-training quantization is applied, it still maintains 2.11% higher BLEU score than the original dense model. And, it could achieve even faster speed than the dense model which is 2.7X speed-up from the fp16 MoE model. This also reduces the memory consumption significantly from 8.38X to 2.67X compared to the dense model. With 2-bit QAT, the MoE model can still maintain 1.88% higher quality than the original dense model, but the model size is now only 1.71X of the original dense model. Also, the matrices are sparse up to 79.1% of the values are zeros.
Figure 5 shows the sparsity distribution of different layers. The second linear layers after the nonlinear activation layers show higher sparsity compared to the first linear layers. Some layers could reach up to 85% sparsity. We include a further investigation of sparsity with magnitude based pruning approach in Appendix I.
4.3 ROBUSTNESS COMPARISON BETWEEN MOE AND DENSE MODELS
We compare robustness against low-bit quantization between MoE and dense models using the posttraining quantization without any QAT. For the dense model, quantization with different bits is applied to the even numbered FFN layers. Appendix B shows this is the best layer selection for the dense model. We use two different datasets to verify the proposed quantization method works in different model settings.
Figure 6 presents the experiment with the model trained with the larger dataset. It shows the average BLEU scores with different quantization precision for both MoE and dense models. The MoE model can maintain accuracy within -0.3 down to 3-bit and -1.82 for 2-bit. On the other hand, the dense model can preserve the accuracy only down to 4-bit, but starts to lose significant accuracy more than 2 BLEU scores when it goes down to 3-bits. In case of 2-bits, dense model loses most of capability by -42.96 BLEU scores. Table 9 shows the score differences by quantization for both MoE and dense models on 10 different language pairs translations.
Figure 7 presents the experiment with the model trained on the smaller dataset. In this setting, each individual expert is smaller, but there are 4 times more experts in one MoE layer. They are also trained on a smaller dataset, so they do not have knowledge equivalent to the previous model trained on the larger dataset. As can be seen in the figure, the quantization behavior shows a similar pattern: the MoE model preserves accuracy even when quantized to 2 or 3 bits, whereas the dense model quickly loses performance when quantized below 4-bit. Again, the MoE model is much more robust to quantization than the dense model.
5 CONCLUSION AND FUTURE WORKS
This paper shows, through various experiments, how robust MoE models are to low-bit quantization. By analyzing component-wise sensitivity and various quantization design choices, we present an efficient and effective way to reduce the model size, resulting in a 4.9X model size reduction. With an optimized runtime, the 4-bit quantized model runs 2.71X faster than the fp16 model. We also show that 2-bit quantization achieves more than 79% sparsity in the expert weights. The results naturally suggest interesting future research directions. The discovered robustness of expert layers can guide better ways to train MoE models: if we can better control the splitting of the latent space, better MoE models can be obtained. Analyzing the interactions between expert FFN layers and the other shared layers in the model could guide a way to build a composable model. In particular, as presented in Appendix E, we observe that quantization sometimes improves accuracy in specific situations. Another important direction is studying how to accelerate sparse expert computation on modern hardware with software/hardware co-design. This will eventually make MoE models much more efficient in both training and inference.
A CHANNEL-WISE VS MATRIX-WISE QUANTIZATION
Scaling factors are calculated by the quantization algorithm and stored as half-precision floating-point (fp16) numbers used to dequantize the matrices. These factors can be chosen at the channel scale or the whole-matrix scale. As shown in Figure 8, channel-wise quantization gives considerably higher scores than tensor-wise quantization, especially at low precision. The additional storage for channel-wise scaling factors is small, because only one value is needed per channel, which amounts to less than 1% of the total parameters in a matrix. Therefore, we use channel-wise quantization for all the quantization experiments.
B QUANTIZATION OF DIFFERENT LAYERS IN A DENSE MODEL
In the paper, we compare a dense model and an MoE model in terms of quantization robustness. To make a fair comparison, we consider quantizing only half of the dense transformer blocks' FFNs, because we quantize only expert weights, which exist only in every other (even-numbered) block. We compare three different configurations: (1) quantizing even-numbered blocks' FFNs only, (2) quantizing odd-numbered blocks' FFNs only and (3) quantizing all FFN layers. As can be seen in Figure 9, quantizing even-numbered blocks' FFNs affects the accuracy the least, and quantizing all FFN layers gives the worst result. Based on this experiment, we quantize only even-numbered transformer blocks' FFNs for the dense model in all the experiments and comparisons.
C SKEWNESS OF WEIGHT MATRICES IN MOE AND DENSE MODELS
In the analysis of model weight distributions in Section 3, we observe that dense models' FFN layers tend to have more outliers than MoE models' expert FFN layers. We measure the skewness of the weight distributions of these layers in Table 3.
D ABSTRACTIVE SUMMARIZATION TASK PERFORMANCE
To validate that the quantization performs well on a different task and model, we evaluate the quantization performance of a 10.1B MoE (64 experts) model on an abstractive summarization task, XSUM (Narayan et al., 2018). Table 4 shows that the MoE model performs well with low-bit quantization such as 2-bit and 3-bit.
E BETTER GENERALIZATION WITH EXPERT QUANTIZATION
We observe an interesting phenomenon in which quantization actually improves the evaluation score on a different-domain dataset. We trained an MoE model with 64 experts (10.1B) on translation across 50 different languages (98 English-centric language pairs). When we evaluate this model on a different-domain subset of 6 languages (German, Spanish, French, Italian, Dutch, English), the evaluation BLEU score increases as we quantize the experts down to 3-bit, without any additional QAT or calibration. With 3-bit quantization, the score increases by more than 6.42% on non-English to English and 6.96% on English to the other languages. In particular, the English-to-Italian and Italian-to-English scores increase by more than 10%, which is quite significant. The results are summarized in Table 5. We are still analyzing the reason for this phenomenon, but we think it is related to how MoE models learn representations. MoE layers might learn very specific knowledge with their increased capacity, while the shared layers learn more generic knowledge. By blurring the representation from the MoE layers, the model becomes more capable on general tasks. This is one of our future research areas.
F ADDITIONAL SPARSITY DISTRIBUTION
We include the sparsity distribution across different layers of a dense model quantized to 4-bit. As can be seen in Figure 10, the sparsity is low overall. The second linear layers in the FFNs show slightly higher sparsity, but all layers stay below 30%.
G MACHINE TRANSLATION DATASET SUMMARY
Table 6 shows the number of parallel sentences used to train the dense and MoE models. Every language has at least 300 million sentences, and the counts differ across languages by less than a factor of two.
H QUANTIZATION AWARE TRAINING (QAT)
For QAT with the straight-through estimator, we use the hyper-parameters in Table 7. Figure 11 shows the validation loss curve of one training run with 2-bit expert quantization.
I MAGNITUDE PRUNING EXPERIMENTS
Inspired by the sparsity that emerges in expert layers, we apply simple magnitude-based pruning to the MoE model we experiment with. We apply different threshold values from 0.00001 to 0.5 and set all weight values smaller than the threshold to zero. We apply 2- to 8-bit quantization together with the pruning. Figure 12 shows how model performance varies with the achieved sparsity. Even at a sparsity level of 90%, the model preserves a certain level of task capability. Compared to Gale et al. (2019), this shows much higher performance at the same sparsity. This is another example of the robustness of expert weights.
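A minimal sketch of the magnitude-based pruning described above follows: every weight whose absolute value falls below the threshold is set to zero, and the resulting sparsity is reported for the threshold range quoted in the text. The random stand-in matrix and function names are illustrative assumptions, not the authors' code.

```python
import torch

def magnitude_prune(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    """Zero out all entries with |w| < threshold and return the pruned copy."""
    mask = weight.abs() >= threshold
    return weight * mask

# sweep thresholds and report the resulting sparsity
w = torch.randn(4096, 1024) * 0.05   # stand-in for one expert FFN weight matrix
for threshold in (1e-5, 1e-3, 1e-2, 0.1, 0.5):
    pruned = magnitude_prune(w, threshold)
    sparsity = (pruned == 0).float().mean().item()
    print(f"threshold={threshold:g}  sparsity={sparsity:.1%}")
```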
J DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON PUBLIC WMT DATASET
Table 8 shows individual BLEU score changes with various quantization bits for MoE and dense models trained on public WMT dataset.
K DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON THE INTERNAL DATASET
Table 9 shows individual BLEU score changes with various quantization bit-widths for MoE and dense models measured on the internal validation dataset. Table 10 shows the same model's evaluation performance on two public WMT datasets. | 1. What is the focus of the paper regarding MoE models and quantization?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of empirical study and thoroughness?
3. Do you have any concerns or suggestions regarding the writing style and clarity of the paper?
4. How does the reviewer assess the novelty and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies the effects and techniques of quantizing mixture of experts (MoE) models originating in the field of NLP. These MoE models are constructed to have gating layers that route inputs through certain matrices, and typically the chosen gating settings are such that overall FLOPs are the same as those of a single (non-MoE) network. As such, the only issue is saving and loading the larger MoE network: according to Table 1, an MoE network of 32 experts requires 9x more storage and has only 0.4x the throughput of the non-MoE network (despite having the same theoretical FLOPs). In this paper, the authors empirically study whether quantization is a viable mechanism to reduce the size of MoE networks and present their findings: using 2-bit quantization on expert layers and performing additional finetuning (QAT), it is possible to achieve an MoE network of 32 experts that is only 1.7x larger than the dense baseline (the non-compressed MoE is 8.4x of the dense network). Their full recipe is available in Section 3.3.
Strengths And Weaknesses
After reading the paper, I can identify the following strengths and weaknesses. Strengths first:
The paper is one of the first to give an empirical study of quantization of MoE networks. It would be a good manual/starting point for practitioners in the field.
Weaknesses:
Thoroughness: Despite having good results and having investigated several quantization options, one is still left with "what if?" style questions. Many additional experiments and empirical evaluations are needed to make this a stronger contribution and to be certain of the presented recommendations. For instance, here are additional questions: 1) if inference happens in fp16, why stick with uniform or log-uniform quantization schemes? How about non-uniform quantization akin to k-means? 2) Why not consider finer grouping for quantization instead of per-tensor and per-channel? 3) Why are PTQ calibration techniques not discussed? Do all calibrations work the same? 4) What is the tradeoff between the number of experts and the bit-width of compression? Are there specific recommendations? And many other questions of this format.
The paper would benefit from another proof-reading pass: there are many places where it is hard to understand what was exactly meant.
Clarity, Quality, Novelty And Reproducibility
The authors make a great effort at describing their experimental setup and I believe that all required details are reasonably disclosed. However, I would like to point out some writing issues that should be addressed:
Generally, whenever there is times X smaller/larger, these sentences are very confusing. Examples: "we show the model size can be reduced 4.9X smaller than" or "This cuts down the model size of 5.3B parameters from 8.4x of the dense model to only 1.7x dense model".
other unclear sentences: "the actual inference speed decreases slower than 40% of ", "weights normally distributed centered around zero", "we use the second optimal value algorithm which results in better quantization accuracy". For instance, for the last sentence: was there a first optimal value algorithm? (answer: no) What is quantization accuracy? (should be accuracy of quantized model) |
ICLR | Title
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
Abstract
Large Mixture of Experts (MoE) models can achieve state-of-the-art quality on various language tasks, including machine translation, thanks to efficient model scaling with expert parallelism (Fedus et al., 2021). However, this has brought a fundamental issue of larger memory consumption at deployment time, which results in significant inference speed degradation at autoregressive decoding steps due to the increased memory transfers. In this paper, we propose Mixture of Quantized Experts (MoQE), a simple weight-only quantization method applying ultra-low-bit quantization, down to 2-bit, only to expert weights in order to mitigate the increased memory and latency issues of MoE models. We show that low-bit quantization together with the MoE architecture delivers reliable model performance while reducing the memory size significantly, even without any additional training. In particular, expert layers in MoE models are much more robust to quantization than conventional feedforward network (FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit and 80% sparse expert weights can deliver better model performance than the dense model trained on the same dataset. We present how quantization of different parts of the model affects performance with various experiments using a large MoE model (5.3B). As a result of low-bit quantization, we show the model size can be reduced by 79.6% compared to the original half-precision floating point (fp16) MoE model. This cuts down the size of the 5.3B-parameter model from 8.4x of the dense model to only 1.7x of the dense model after 2-bit quantization, while still preserving 1.88% higher accuracy than the dense model. Combined with an optimized GPU runtime implementation, it also achieves a 2.7X speed-up, which is even slightly faster than the FLOPs-equivalent dense model.
1 INTRODUCTION
Large Language Models (LLMs) have shown their effectiveness on various language tasks by increasing the number of trainable parameters together with the framework of pre-training a model on a large scale data and using it to different downstream tasks (Devlin et al., 2018; Radford et al., 2018; Liu et al., 2019; Raffel et al., 2020). With the advancement of distributed large scale training methods (Shazeer et al., 2018; Rasley et al., 2020; Ren et al., 2021; Baines et al., 2021) and large scale data collection (Raffel et al., 2020; Hoffmann et al., 2022), the models get even larger and break state-of-the-art performance with the increased model capacity (Brown et al., 2020; Rae et al., 2021; Zoph et al., 2022; Zhang et al., 2022; Smith et al., 2022; Chowdhery et al., 2022). However, the cost of training these models increases whenever more parameters are added, and this may not be sustainable.
As a solution to address this issue, sparsely activated models (Shazeer et al., 2017) have been more widely adopted and show significant efficiency improvements in model size scaling, enabling up to trillions of parameters to be trained more efficiently and achieving better model accuracy (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). Mixture-of-Experts (MoE) models are one type of sparsely activated models, replacing a single layer in a model with a group of parallel layers, called experts, combined with a gate layer. For a given input, the gate layer selects a subset of the experts from the group and uses them for processing
the input. By limiting the number of selected experts for a given input to one or two, the theoretical FLOPs stays almost constant even if we add hundreds of parallel layers into the MoE group. Thus far, most studies have shown that it is effective to increase the capacity of models by replacing the feedforward networks (FFN) of Transformer (Vaswani et al., 2017) blocks with an MoE layer consisting of multiple FFN layers together with a gating network (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). One of the most unique and critical components of MoE models is the gating network, which decides how to conditionally select experts for each input. There have been various studies to improve it for better training convergence (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2021; Clark et al., 2022; Liu et al., 2022; Zhou et al., 2022), and they are well surveyed in Fedus et al. (2022).
In spite of the progress on the training of MoE models, there have been only a handful of studies related to MoE model inference. Rajbhandari et al. (2022) designs a more efficient MoE architecture and distributed runtime to achieve a 7.3X inference speed-up. Kudugunta et al. (2021) uses task specific information to reduce the size of the model at deployment time by only loading task specific experts. Kim et al. (2021) prunes some experts at deployment time to reduce the model size by trading off model performance. Zoph et al. (2022) uses knowledge distillation to distill a large MoE model into a smaller dense model to reduce memory consumption and improve throughput. Even with all the proposed techniques, there has not been a solution that accelerates the inference of MoE models while maintaining the accuracy.
Quantization is a model acceleration and compression technique that approximates floating point numbers with lower precision numbers. Various studies show that quantization is effective for accelerating neural network model inference (Rodriguez et al., 2018; Stock et al., 2019; Choukroun et al., 2019; Gholami et al., 2022). In particular, it has been known to be very effective in natural language generation tasks such as machine translation (Kim et al., 2019; Aji & Heafield, 2020; Fan et al., 2021) and in natural language understanding (Kim & Awadalla, 2020) tasks. However, there has not been an in-depth study of how quantization works with large MoE models.
Recently, Dettmers et al. (2022) and Yao et al. (2022) have studied how quantization works on large scale language models. Dettmers et al. (2022) looks at outlier features in the activations of large language models and proposes to decompose them while performing matrix multiplications. In our quantization method, this is not needed because it is a weight-only quantization and outliers in activations cannot affect the performance. The weights are dequantized back to fp16 while the matrix multiplication is done, which also means our approach does not require special low-bit instructions. And, we show that this can be applied to fewer bits than 8-bit for large MoE models. ZeroQuant (Yao et al., 2022) presents a series of techniques including knowledge distillation (Kim & Rush, 2016) for achieving higher quality quantization. Our focus is to exploit the intrinsic characteristics of MoE layers based on our investigation, and we show that a simple quantization algorithm can achieve significantly higher efficiency and maintain the quality at the same time.
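To make the weight-only scheme concrete, the sketch below shows a linear layer that stores integer weight codes plus per-channel fp16 scales and dequantizes them just before the matmul, so activations stay in fp16 and no special low-bit instructions are needed. The class and attribute names are illustrative assumptions; a real deployment would additionally bit-pack the codes to realize the memory savings.

```python
import torch
import torch.nn as nn

class WeightOnlyQuantLinear(nn.Module):
    """Stores integer weight codes + per-channel scales; dequantizes at matmul time."""
    def __init__(self, int_weight: torch.Tensor, scales: torch.Tensor):
        super().__init__()
        # int_weight: (out_features, in_features) integer codes (int8 container here)
        # scales: (out_features,) fp16 per-channel scaling factors
        self.register_buffer("int_weight", int_weight)
        self.register_buffer("scales", scales)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # dequantize to the activation dtype on the fly, then run a standard matmul
        w = self.int_weight.to(x.dtype) * self.scales[:, None].to(x.dtype)
        return x @ w.t()
```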
Our contributions in this paper are as below.
• We present extensive studies about how applying low-bit (down to 2-bits) quantization to different layers of MoE transformer models affects the model accuracy together with comparisons to the corresponding dense model with the same embedding size.
• We show that expert weights are highly robust to quantization: they can be quantized to 3-bit without additional training or calibration data, and to 2-bit with Quantization Aware Training (QAT), which results in a 79.6% reduction in memory size. Combined with a runtime optimization, we show that the method boosts inference speed by more than 2.7X. We leverage the memory-bound characteristic of auto-regressive decoders, so the reduced memory bottleneck improves overall efficiency even with the additional dequantization steps in our procedure. Based on these observations, we propose a new framework named Mixture of Quantized Experts (MoQE), a simple weight-only quantization method applied only to MoE expert weights.
• Finally, we show that 2-bit quantization induces an emerging sparsity of more than 80% zero values in the expert weights. The expert weight matrices are thus sparse and very low-precision at the same time, while still outperforming the dense counterpart trained on the same dataset.
2 BACKGROUND - CHALLENGES OF DEPLOYING MOE MODELS
In the widely used MoE architecture, even with a constant or only sub-linearly higher theoretical FLOPs by using top-1 or top-2 gating, the increased model size with additional experts has a serious negative impact on the inference performance in various aspects.
2.1 INCREASED MEMORY FOOTPRINT
First of all, due to the increased model size, the model requires much more accelerator memory. With modern accelerators like GPUs, accelerator memory is limited, so more accelerators are required to serve a single model, which causes the communication problem described next. Also, because the model takes up more memory, the batch size is limited to be small, which prevents optimal utilization of the processing cores.
2.2 SLOWER INFERENCE SPEED
Increased communication overhead. In the distributed training and inference set-up for large scale models, it is natural to use many GPUs or accelerators for a single model. The model weights can be distributed across different accelerators with various techniques (Ren et al., 2021) and expert parallelism (Fedus et al., 2021). However, Liu et al. (2022) show that the communication overhead of expert parallelism at training time can take up more than half of the entire end-to-end time, depending on the number of GPUs and clusters. This can affect inference efficiency even more severely because inference usually needs fewer FLOPs than training, so the communication bottleneck stands out more. Therefore, it is desirable to use as few accelerators as possible to avoid this overhead.
Memory bandwidth bottleneck with MoE layers. The increase in the model size not only causes communication overhead, but also brings a significant inference speed impact on the modern processor architectures. While performing beam search decoding, the size of activation (an individual token) is relatively small and the decoding operation is memory bandwidth bounded. This means transferring model weight matrices in a memory hierarchy is a huge bottleneck. With the increased number of experts, the burden of memory transfer increases even more, and directly impacts the inference speed.
Inference speed measurement. Table 1 shows an actual speed difference measured with dense and MoE models on an NVIDIA V100 GPU. The two models are encoder-decoder models based on the transformer architecture (Vaswani et al., 2017) and have exactly the same model settings except for the number of experts. The speed measurements are done on the translation task from German to English using auto-regressive beam search with a beam size of five. Both models are evaluated with the same PyTorch1 runtime with half-precision floating point (fp16). The MoE model uses top-1 gating, which assigns only one expert for a given input token and therefore provides the same theoretical FLOPs as the corresponding dense model (with the same embedding size). Due to the excessive memory transfer caused by the increased number of experts, the actual inference speed drops by 60%, to only 40% of the original dense model's speed, as shown in the table.
To overcome these challenges, we focus on reducing the model size through quantization. In particular, the increased model size and latency come mostly from the expert FFN weights, which contribute
1https://github.com/pytorch/pytorch
92.8% of all weights in this specific model setting, so the expert FFN weights are our main target for optimization. Given the sparsity that emerges in expert weights from low-bit quantization, we also explore a further sparsification opportunity with a simple magnitude pruning technique.
3 QUANTIZATION METHODS FOR MOE LAYERS
There are multiple design choices to quantize model weights. In this section, we analyze the numerical characteristics of different layers in a large MoE model, and describe the decisions we have made to most effectively quantize the MoE layers.
3.1 NUMERICAL DISTRIBUTION OF MODEL WEIGHTS
When quantizing matrices, it is desirable not to have outliers but to have smoothly distributed numerical values, because outliers skew the range to be quantized and make the scaling factors too large. Figure 1 shows box plots of the weight distributions of linear layers in the MoE model's FFN blocks. Following the widely used practice, an MoE layer is placed at every other layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021): even-numbered layers {0, 2, ...} are expert FFN layers and odd-numbered layers {1, 3, ...} are normal dense FFN layers. First, all of them are centered around zero. However, dense FFN layers have a much larger range than MoE FFN layers, which indicates that dense FFN layers have more outliers than MoE FFN layers. This phenomenon is more prevalent in the second linear layers, with values sometimes reaching down to −8.0, as shown in Figure 1b. Figure 2 shows example histograms of an expert FFN weight and a dense FFN weight. As can be seen in Figure 2b, the example dense FFN layer suffers seriously from outliers, whereas the expert FFN weights in Figure 2a show a smooth distribution without any major outliers. We observe a similar pattern across different layers and different experts. In Appendix C, we additionally include statistics over all layers. This statistical observation indicates that MoE FFN layers are well suited for quantization.
Based on this observation, the FFN weights follow a roughly normal distribution with a mean value near zero. Therefore, we use symmetric quantization, without shifting the zero point. Even for the dense FFN layers, the means and standard deviations are around zero except for the outliers, as can be seen in the box plots of Figure 1. Symmetric quantization also has the advantage of mapping many weight values near the center to zero, which can result in sparse model weights.
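The per-layer statistics behind Figure 1 and the skewness values in Table 3 can be gathered with a simple pass over the checkpoint, as in the sketch below. It assumes a generic state dict and uses scipy's sample skewness, which is one reasonable choice but not necessarily the exact estimator used for the paper; the key substring is a placeholder.

```python
import torch
from scipy.stats import skew

def weight_stats(state_dict, substr="fc"):
    """Print range, standard deviation, and skewness for each matching weight matrix."""
    for name, w in state_dict.items():
        if substr in name and w.dim() == 2:
            v = w.float().flatten()
            print(f"{name}: min={v.min().item():.3f} max={v.max().item():.3f} "
                  f"std={v.std().item():.3f} skew={skew(v.numpy()):.3f}")
```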
3.2 QUANTIZATION ALGORITHMS
3.2.1 QUANTIZATION TECHNIQUES
We try two quantization techniques: (i) linear quantization, which maps quantized integer values to the original float range uniformly, and (ii) log-based quantization from Aji & Heafield (2020), which maps the integer and float ranges on a log scale. In both cases, we choose channel-wise quantization over matrix-wise quantization based on the experiment in Appendix A.
Linear quantization with absolute maximum. The first technique is linear quantization which, given a matrix $A$ and $b$ bits, encodes $A$ as follows:

$$s_j = \frac{2 \times \max(|A_{:,j}|)}{2^b - 1}, \qquad Q_{:,j} = \mathrm{int}\left(\frac{A_{:,j}}{s_j}\right)$$

where $s$ is the scaling factor, which can be chosen per channel as shown or for the whole tensor. At inference time, the quantized $Q$ is dequantized back to $A'$ with the scaling factor $s$ as follows:

$$A'_{:,j} = Q_{:,j} \times s_j$$
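As an illustration, a minimal PyTorch sketch of this channel-wise symmetric linear quantization is shown below; the variable names mirror the equations, and interpreting int(·) as round-to-nearest is an assumption.

```python
import torch

def linear_quantize(A: torch.Tensor, bits: int):
    """Channel-wise symmetric linear quantization of a 2-D weight matrix A."""
    # one scale per column (channel): s_j = 2 * max|A_:,j| / (2^b - 1)
    s = 2.0 * A.abs().amax(dim=0) / (2 ** bits - 1)   # shape: (channels,)
    Q = torch.round(A / s).to(torch.int8)             # integer codes
    return Q, s

def linear_dequantize(Q: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    return Q.to(torch.float16) * s.to(torch.float16)

# example round trip on a random stand-in expert FFN weight
A = torch.randn(1024, 4096)
Q, s = linear_quantize(A, bits=4)
err = (A - linear_dequantize(Q, s).float()).abs().mean()
print(f"mean abs quantization error: {err:.5f}")
```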
Log-scale quantization. The second technique is log-scale quantization, where 1 bit is kept for the sign and $(b-1)$ bits are used to encode the log-scaled values. Given a matrix $A$, the quantization formula is as follows:

$$P = \mathrm{sign}(A), \qquad T = \mathrm{clip}\left(\frac{|A|}{s},\, 2^{1-2^{b-1}},\, 1\right), \qquad Q = \left\lceil \log_2\left(\tfrac{2}{3}\, T\right) \right\rceil$$

where $s$ can be chosen in two ways: either (i) the absolute maximum or (ii) the optimal value that minimizes the mean squared error (MSE) between the quantized and original values, as described in Aji & Heafield (2020). We use the second option, which we observe gives better quantization accuracy. At inference time, the quantized weight values are dequantized as follows:

$$A' = P \times s \times 2^{Q}$$
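A corresponding sketch of the log-scale variant is given below, using the absolute-maximum choice for $s$ for simplicity (the MSE-optimal search from Aji & Heafield (2020) is omitted). This is an illustrative reconstruction of the formulas above, not the authors' implementation.

```python
import torch

def log_quantize(A: torch.Tensor, bits: int, s: float):
    """Log-scale quantization: one sign bit plus (bits-1) bits for the exponent."""
    P = torch.sign(A)
    # keep |A|/s inside the representable range so Q stays within (bits-1) bits
    T = torch.clamp(A.abs() / s, min=2.0 ** (1 - 2 ** (bits - 1)), max=1.0)
    Q = torch.ceil(torch.log2(2.0 / 3.0 * T))          # non-positive integer exponents
    return P, Q

def log_dequantize(P: torch.Tensor, Q: torch.Tensor, s: float) -> torch.Tensor:
    return P * s * torch.pow(torch.tensor(2.0), Q)

A = torch.randn(1024, 4096) * 0.05
s = A.abs().max().item()                                # absolute-maximum scale
P, Q = log_quantize(A, bits=4, s=s)
err = (A - log_dequantize(P, Q, s)).abs().mean()
print(f"mean abs quantization error: {err:.5f}")
```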
Comparison of quantization techniques. Figure 3 shows the comparison between the two quantization techniques at low bit-widths, applied to expert FFN layers and dense FFN layers. For dense FFN layers, log-scale quantization performs slightly better, but neither works well at 2-bit, resulting in almost zero evaluation scores. For expert FFN layers, both techniques work similarly at 3 and 4 bits, but log-scale quantization loses accuracy severely at 2-bit. This is because there are only 4 bins for the integer values with 2-bit quantization and one of them is zero. Log-scale quantization tries to split values near zero in a more fine-grained way, but this actually hurts performance compared to having enough zeros with linear quantization. Based on this experiment, we use linear quantization for compressing MoE FFN layers.
3.2.2 ROBUSTNESS OF EXPERT LAYERS TO QUANTIZATION
To better understand how applying quantization on different parts of an MoE model affects the accuracy, we conduct a set of experiments with various quantization bits. We divide an MoE model
into four parts: (i) expert FFN layers, (ii) dense FFN layers, (iii) self-attention layers and (iv) cross-attention layers. Based on the observation that linear quantization works better at lower bits, we use it for this set of experiments.
Figure 4 shows evaluation BLEU scores when quantizing different parts of the MoE model. We observe that quantizing expert FFN layers to 2-bit does not seriously impact the overall model quality. However, quantizing other parts of the model to 2-bit hurts the output quality significantly. Quantized cross-attention and self-attention blocks can still maintain quality with 3-bit quantization, but their performance suffers with 2-bit quantization. On the other hand, dense FFN layers are significantly impacted by lower-bit quantization at 2-bit and 3-bit: with 3-bit quantization, the model score drops by 23% of the original score, and 2-bit quantization of dense FFN layers gives an almost zero score. We include the same study on a dense model in Appendix B; a similar pattern is observed with 2- and 3-bit quantization.
3.3 MIXTURE OF QUANTIZED EXPERTS (MOQE)
Based on the experiments from the previous parts of this section, we propose a very simple, highly effective and accurate quantization recipe for MoE models.
• Apply weight-only quantization while keeping activations in fp16.
• Quantize expert FFN layers only.
• Use channel-wise, symmetric quantization.
• Choose one of the two following settings depending on the quantization precision:
1. (3-bit or higher): directly quantize trained MoE models without additional calibration.
2. (2-bit): fine-tune the model with Quantization Aware Training (QAT), described below.
Sparse experts with quantization aware training. QAT is a well-known method used to recover the accuracy loss from quantization (Gholami et al., 2022). In our case, to quantize to 2-bit precision, we continue training the model with the original training data while applying quantization only in the forward pass computation, as presented in Wu et al. (2020); Bengio et al. (2013), to recover the accuracy loss. As we use symmetric quantization with 2-bit, the zero value is always among the quantization levels. Due to the normal distribution of expert weights centered around zero, many weight values naturally turn into zeros. This procedure results in sparse expert matrices.
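A minimal sketch of this forward-pass-only quantization with a straight-through estimator (STE) is shown below: the forward pass sees the fake-quantized weights while gradients flow to the underlying fp16 weights as if no quantization had happened. This is the generic STE pattern under our stated assumptions, not the exact training code used for the 2-bit QAT runs.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Fake-quantize in the forward pass; pass gradients through unchanged."""
    @staticmethod
    def forward(ctx, weight, bits):
        # channel-wise symmetric linear quantization, then immediate dequantization
        s = 2.0 * weight.abs().amax(dim=0) / (2 ** bits - 1)
        return torch.round(weight / s) * s

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None        # straight-through estimator

def qat_forward(linear: torch.nn.Linear, x: torch.Tensor, bits: int = 2):
    w_q = STEQuantize.apply(linear.weight, bits)
    return torch.nn.functional.linear(x, w_q, linear.bias)
```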
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Task. We use a multilingual machine translation task for our experiments with two different datasets covering 20 and 10 language directions, respectively. We also evaluate the proposed method on a different task, presented in Appendix D. We use sacrebleu2 on the detokenized output to measure the accuracy of the models. A single NVIDIA PCIe V100 running inside a docker container with Ubuntu 20.04 and CUDA 11.6 is used for all experiments, and all code is compiled with nvcc and gcc/g++ 9.3. We measure the end-to-end runtime of inference on the evaluation dataset.
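For completeness, detokenized BLEU with sacrebleu can be computed roughly as below; the file names are placeholders and the hypothesis/reference files are assumed to contain one detokenized sentence per line.

```python
import sacrebleu

with open("hyp.detok.txt") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.detok.txt") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(bleu.score)
```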
Datasets. We use two different datasets, described below. For the larger dataset setting, we use an internally collected dataset consisting of 6 different languages: German (de), French (fr), Italian (it), Spanish (es), Dutch (nl) and English (en). The sentences are crawled from the web, and each language pair has at least several hundred million sentences. We use a 128,000 sub-word vocabulary built with the sentencepiece3 library. The number of training sentences is included in Appendix G. For the smaller dataset setting, we use the WMT-10 benchmark dataset widely used for public benchmarks (Wang et al., 2020; Kim et al., 2021). There are 32.5 million sentence pairs for 20 English-centric language pairs including French (fr), Czech (cs), German (de), Finnish (fi), Latvian (lv), Estonian (et), Romanian (ro), Hindi (hi), Turkish (tr) and Gujarati (gu).
Model architecture. For all experiments with the large dataset, we use 24 transformer (Vaswani et al., 2017) encoder layers and 12 transformer decoder layers, following the deeper-encoder and shallower-decoder practice (Kim et al., 2019; Kasai et al., 2021) to be more efficient at auto-regressive decoding. The embedding dimension is 1,024 and the FFN hidden dimension is 4,096. To encode positional information into the hidden state, we use Transformer with Untied Positional Encoding (TUPE) proposed in Ke et al. (2021) instead of the conventional sinusoidal positional embedding. Another design choice is the location of layer normalization: for training stability, we use pre-layer normalization proposed in Xiong et al. (2020) instead of the original post-layer normalization from Vaswani et al. (2017). We train MoE and dense models for the comparison. The architecture choices mentioned here are common to both models; the only difference between the dense and MoE models is the number of experts. We use 32 experts for the MoE model trained on the larger web data. We use beam search decoding with a beam size of 5. For the experiments with the smaller dataset, we use 12 transformer encoder layers and 6 transformer decoder layers. The embedding dimension is 768 and the FFN hidden dimension is 3,072. In this setting, we use MoE layers with 128 experts at every other layer.
MoE architecture. For the MoE-specific settings, we use top-1 learned gating from Fedus et al. (2021) and place an MoE layer at every other (even-numbered) layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). During the training of MoE models, we use jittering noise and a balancing loss (ratio of 0.01) as suggested in Lepikhin et al. (2020); Fedus et al. (2021) to distribute expert utilization more uniformly. To prevent overfitting and better regularize the model, we also use gating dropout (0.2) (Liu et al., 2022).
4.2 MOQE PERFORMANCE RESULTS
We apply MoQE quantization recipe to an MoE model and compare the performance with several dense models in Table 2. This experiment is done on the larger web dataset. The baseline is a dense model trained on the same dataset as the MoE model. Throughput, memory size and sparsity are all measured with the fp16 precision model. As additional comparison points, the dense model is also
2https://github.com/mjpost/sacrebleu 3https://github.com/google/sentencepiece
quantized to 8-bit and 4-bit only on the even numbered FFN layers which is the best configuration for quantizing the dense model described in Appendix B. For the MoE model, various quantization settings ranging from 8-bit to 2-bit are measured together with the original fp16 performance. For 2- bit quantization, additional QAT is applied. Finally, we applied magnitude based pruning approach to the 2-bit quantized model to acquire a sparser model.
First of all, the MoE model achieves 2.87% improvement on the BLEU score while increasing the model size to 8.38X of the original dense model. When 4-bit post-training quantization is applied, it still maintains 2.11% higher BLEU score than the original dense model. And, it could achieve even faster speed than the dense model which is 2.7X speed-up from the fp16 MoE model. This also reduces the memory consumption significantly from 8.38X to 2.67X compared to the dense model. With 2-bit QAT, the MoE model can still maintain 1.88% higher quality than the original dense model, but the model size is now only 1.71X of the original dense model. Also, the matrices are sparse up to 79.1% of the values are zeros.
Figure 5 shows the sparsity distribution of different layers. The second linear layers after the nonlinear activation layers show higher sparsity compared to the first linear layers. Some layers could reach up to 85% sparsity. We include a further investigation of sparsity with magnitude based pruning approach in Appendix I.
4.3 ROBUSTNESS COMPARISON BETWEEN MOE AND DENSE MODELS
We compare robustness against low-bit quantization between MoE and dense models using the posttraining quantization without any QAT. For the dense model, quantization with different bits is applied to the even numbered FFN layers. Appendix B shows this is the best layer selection for the dense model. We use two different datasets to verify the proposed quantization method works in different model settings.
Figure 6 presents the experiment with the model trained with the larger dataset. It shows the average BLEU scores with different quantization precision for both MoE and dense models. The MoE model can maintain accuracy within -0.3 down to 3-bit and -1.82 for 2-bit. On the other hand, the dense model can preserve the accuracy only down to 4-bit, but starts to lose significant accuracy more than 2 BLEU scores when it goes down to 3-bits. In case of 2-bits, dense model loses most of capability by -42.96 BLEU scores. Table 9 shows the score differences by quantization for both MoE and dense models on 10 different language pairs translations.
Figure 7 presents the experiment with the model trained with the smaller dataset. In this setting, each individual expert is smaller, but there are 4 times more experts in one MoE layer. And, they are trained with smaller dataset, so they do not have equivalent knowledge as the previous model trained on the larger dataset. As can be seen in the Figure, the quantization performance shows a similar pattern. The MoE model preserves accuracy even when it is quantized to 2 or 3 bits. However, dense model quickly loses the performance when it is quantized down to lower than 4-bit. Again, the MoE model is much more robust to quantization than the dense model.
5 CONCLUSION AND FUTURE WORKS
This paper shows how much MoE models are robust to the low-bit quantization with various experiments. By analyzing component-wise sensitivity and various quantization design choices, we present an efficient and effective way to reduce the model size which results in 4.9X model size reduction. With an optimized runtime, 4-bit quantized model can run 2.71X faster than the fp16 model. We also show 2-bit quantization could achieve more than 79% sparsity in the expert weights. The results naturally bring interesting future research directions. The discovered robustness of expert layers can guide a better way to train MoE models. If we can better control the splitting of latent space, better MoE models can be acquired. Analyzing the interactions between expert FFN layers and the other common layers in the model could guide a way to build a composable model. Especially, as presented in Appendix E, we observe that quantization sometimes improves the accuracy on tasks in a specific situation. Another important direction will be studying how to accelerate sparse expert computation on modern hardware with software/hardware co-design. This will eventually make MoE models much more efficient in both training and inference.
A CHANNEL-WISE VS MATRIX-WISE QUANTIZATION
Scaling factors are calculated by the quantization algorithm and stored in half precision floatingpoint (fp16) numbers to dequantize the matrices with. These factors can be chosen on the channel scale or the whole matrix scale. As shown in figure 8, channel-wise quantization gives quite higher scores than tensor-wise especially for low precision. Additional parameters to store channel-wise scaling factors is small, because only one value is needed for a channel and less than 1% of total parameters in a matrix. Therefore, we use channel-wise quantization for all the quantization experiments.
B QUANTIZATION OF DIFFERENT LAYERS IN A DENSE MODEL
In the paper, we compare a dense model and an MoE model in terms of quantization robustness. To make a fair comparison, we consider quantizing only half of the dense transformer blocks’ FFNs, because we quantize expert weights only which exist only in every other block (even numbered). We compare three different configurations - (1) quantizing even numbered blocks’ FFNs only, (2) quantizing odd numbered blocks’ FFNs only and (3) quantizing all FFN layers. As can be seen in Figure B, quantizing even numbered blocks’ FFNs affects the accuracy the least, and quantizing all FFN layers give the worst result. Based on this experiment, we quantize only even numbered transformer blocks’ FFNs for the dense model in all the experiments and comparisons.
C SKEWNESS OF WEIGHT MATRICES IN MOE AND DENSE MODELS
In the analysis of model weight distribution in Section 3, we observe that dense models’ FFN layers tend to have more outliers than MoEs’ expert FFN layers. We measure the skewness of weight distribution of those in Table 3.
D ABSTRACTIVE SUMMARIZATION TASK PERFORMANCE
To validate the quantization performs well on a different task and a model, we evaluate a 10.1 B MoE (64 experts) model’s quantization performance on an abstractive summarization task called XSUM (Narayan et al., 2018). Table 4 shows that the MoE model performs well with low-bit quantizations such as 2-bits and 3-bits.
E BETTER GENERALIZATION WITH EXPERT QUANTIZATION
We observe an interesting phenomenon that quantization actually improves the score of evaluation on a different domain dataset. We trained an MoE model with 64 experts (10.1B) on 50 different language translations (98 English-centric language pairs). When we evaluate this model on a different domain subset 6 languages (German, Spanish, French, Italian, Dutch, English), the evaluation BLEU score increases until we quantize the experts down to 3-bits without any additional QAT or calibrations. With 3-bit quantization, the score increases more than 6.42% on non-English to English and 6.96% on English to the others. Especially, from English to Italian and from Italian to English scores increase more than 10% which quite significant. The results are summarized in Table 5. We are analyzing what could be the reason for this phenomenon, but we think this is related to how MoE models learn representations. MoE layers might learn very specific knowledge with its increased capacity, but the shared layers learn more generic knowledge. By blurring the representation from the MoE layers, the model becomes more general task capable. This is one of our future research areas.
F ADDITIONAL SPARSITY DISTRIBUTION
We include sparsity distribution across different layers in a dense model quantized with 4-bit. As can be seen in Figure 10, the sparsity is overall low. Second linear layers in FFN show slightly higher sparsity, but all of them are smaller than 30%.
G MACHINE TRANSLATION DATASET SUMMARY
Table 6 shows the number of parallel sentences used to train dense and MoE models. All languages have at least 300 million sentences and the differences in the number among languages are less than two times.
H QUANTIZATION AWARE TRAINING (QAT)
For the QAT with straight through estimator, we use the hyper-parameters as in Table 7. Figure 11 shows the validation loss curve of one training run with 2-bit expert quantization.
I MAGNITUDE PRUNING EXPERIMENTS
Inspired by the emerged sparsity in expert layers, we apply a simple magnitude based pruning to the MoE model we experiment with. We apply different threshold values from 0.00001 to 0.5. We make all the weight values less than the threshold to be zero. We apply 2 to 8 bit quantization together. Figure 12 shows how model performance varies with the achieved sparsity. Even with sparsity level of 90%, the model preserves a certain level of task capability. Compared to Gale et al. (2019), this shows much higher performance with the same sparsity. This could be another example showing the robustness of expert weights.
J DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON PUBLIC WMT DATASET
Table 8 shows individual BLEU score changes with various quantization bits for MoE and dense models trained on public WMT dataset.
K DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO
Table 9 shows individual BLEU score changes with various quantization bits for MoE and dense models measured with the internal validation dataset. Table 10 shows the same model’s evaluation performance on two WMT public dataset. | 1. What is the focus of the paper regarding MoE models?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the presentation and explanations in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a quantization technique for MoE models. The authors first study the weight distributions of different layers (MoE layer, Dense layer) and found that the MoE layer has fewer weight outliers, which is why the authors claim that the MoE layers are more suited for quantization. The authors then studied two different quantization methods, i.e. linear quantization and log-scale quantization. The authors then show performance after quantization on different modules, and then propose a quantization recipe.
Strengths And Weaknesses
[+] The idea is straightforward enough
[+] The topic is timely
[-] The presentation is not clear
[-] The novelty is limited
Please refer to the next section for more details.
Clarity, Quality, Novelty And Reproducibility
The presentation is not clear: The term MoQE is present in the title, but I cannot find another "MoQE" until Section 4.2. It is not clear to me what exactly MoQE means until Section 3.3.
Section 3.3 talks about the recipe. However, a significant portion of details is missing. What is QAT? What is the post-training quantization? There is neither explanation nor reference on these terms.
Novelty is incremental: the authors only combine existing quantization techniques with MoE layers. There is no new technique proposed except a quantization recipe.
Some statements are not rigorous or lack explanation: for example, using jittering noise and balancing loss cannot uniformly distribute expert utilization (or at least need to show the patterns); it is not clear why symmetric quantization can give an advantage to quantize many weights near zero, since as long as the ''0'' bin covers many weights. |
ICLR | Title
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
Abstract
Large Mixture of Experts (MoE) models could achieve state-of-the-art quality on various language tasks, including machine translation task, thanks to the efficient model scaling capability with expert parallelism (Fedus et al., 2021). However, it has brought a fundamental issue of larger memory consumption at deployment time. Furthermore, this results in significant inference speed degradation at autoregressive decoding steps due to the increased memory transfers. In this paper, we propose Mixture of Quantized Experts (MoQE) which is a simple weight-only quantization method applying ultra low-bit down to 2-bit quantizations only to expert weights for mitigating the increased memory and latency issues of MoE models. We show that low-bit quantization together with the MoE architecture delivers a reliable model performance while reducing the memory size significantly even without any additional training. Especially, expert layers in MoE models are much more robust to the quantization than conventional feedforward networks (FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit and 80% sparse expert weights can deliver better model performance than the dense model trained on the same dataset. We present how quantization of different parts of models affects the performance with various experiments using a large MoE model (5.3 B). As a result of low-bit quantization, we show the model size can be reduced by 79.6% of the original half precision floating point (fp16) MoE model. This cuts down the model size of 5.3B parameters from 8.4x of the dense model to only 1.7x of the dense model after 2-bit quantization. It still preserves 1.88% higher accuracy than the dense model. Combined with an optimized GPU runtime implementation, it also achieves 2.7X speed-up which is even slightly faster than the FLOPs equivalent dense model.
1 INTRODUCTION
Large Language Models (LLMs) have shown their effectiveness on various language tasks by increasing the number of trainable parameters together with the framework of pre-training a model on a large scale data and using it to different downstream tasks (Devlin et al., 2018; Radford et al., 2018; Liu et al., 2019; Raffel et al., 2020). With the advancement of distributed large scale training methods (Shazeer et al., 2018; Rasley et al., 2020; Ren et al., 2021; Baines et al., 2021) and large scale data collection (Raffel et al., 2020; Hoffmann et al., 2022), the models get even larger and break state-of-the-art performance with the increased model capacity (Brown et al., 2020; Rae et al., 2021; Zoph et al., 2022; Zhang et al., 2022; Smith et al., 2022; Chowdhery et al., 2022). However, the cost of training these models increases whenever more parameters are added, and this may not be sustainable.
As a solution to address this issue, sparsely activated models (Shazeer et al., 2017) are more widely adopted and show significant efficiency improvements in terms of model size scaling while enabling up to trillions of parameters to be trained more efficiently and achieving better model accuracy (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). Mixture-of-Experts (MoE) models are one type of sparsely activated models replacing a single layer in a model with a group of parallel layers which are called experts combined with a gate layer. For a given input, the gate layer selects a subset of the experts from the group, and use them for processing
the input. By limiting the number of subset layers for a given input to one or two, the theoretical FLOPs stays almost constant even if we add hundreds of parallel layers into the MoE group. Thus far, most studies have shown that it is effective to increase the capacity of the models by replacing feedforward networks (FFN) of Transformer (Vaswani et al., 2017) blocks with MoE layer consists of multiple FFN layers together with a gating network (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). One of the most unique and critical components of MoE models is the gating network which decides how to conditionally select experts for each input, and there have been various studies to improve it to achieve a better training convergence ((Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2021; Clark et al., 2022; Liu et al., 2022; Zhou et al., 2022) and they are well surveyed in Fedus et al. (2022).
In spite of the progress on the training of MoE models, there have been only a few handfuls of studies related to MoE model inference. Rajbhandari et al. (2022) designs a more efficient MoE architecture and distributed runtime to achieve 7.3X inference speed-up. Kudugunta et al. (2021) uses task specific information to reduce the size of the model at deployment time by only loading task specific experts. Kim et al. (2021) prunes some experts at deployment time to reduce the model size by trading-off model performance. Zoph et al. (2022) uses knowledge distillation technique to distill a large MoE model into a smaller dense model to reduce the memory consumption and improve the throughput. Even with all the proposed techniques, there has not been a solution to accelerate the inference of MoE models while maintaining the accuracy.
Quantization is a type of model acceleration and compression techniques by estimating a floating point number into a smaller precision number. There are various studies that show quantization is effective to accelerate neural network model inference (Rodriguez et al., 2018; Stock et al., 2019; Choukroun et al., 2019; Gholami et al., 2022). Especially, it has been known to be very effective in natural language generation such as machine translation ((Kim et al., 2019; Aji & Heafield, 2020; Fan et al., 2021)) and natural language understanding (Kim & Awadalla, 2020) tasks. However, there has not been an in-depth study about how quantization works with large MoE models.
Recently, Dettmers et al. (2022); Yao et al. (2022) have studied how quantization works on large scale language models. Dettmers et al. (2022) looks at outlier features in the activations of large language models, and proposes to decompose them while performing matrix multiplications. In our quantization method, this is not needed because it is a weight-only quantization and outliers in activations cannot affect the performance. And, the weights are dequantized back to fp16 while matrix multiplication is done. This also makes our approach not require a special low-bit instructions. And, we show that this can be applied to lower bits than 8-bit for large MoE models. ZeroQuant (Yao et al., 2022) presents a series of techniques including knowledge distillation (Kim & Rush, 2016) for achieving a higher quality quantization. Our focus is to exploit the intrinsic characteristics of MoE layers based on our investigation, and we show that a simple quantization algorithm can achieve significantly higher efficiency and maintain the quality at the same time.
Our contributions in this paper are as below.
• We present extensive studies about how applying low-bit (down to 2-bits) quantization to different layers of MoE transformer models affects the model accuracy together with comparisons to the corresponding dense model with the same embedding size.
• We show that expert weights are highly robust to the quantization, therefore they can be quantized to 3-bit without additional training or calibration data and to 2-bit with Quantization Aware Training (QAT) which results in 79.6% reduction in memory size. Combined with a runtime optimization, we show that the method boosts the inference speed significantly more than 2.7X faster. We leverage the memory bounded characteristic of auto-regressive decoders, so reduced memory bottleneck improves the overall efficiency even with additional dequantization steps in our procedure. Based on the observations, we propose a new framework named Mixture of Quantized Experts (MoQE) which is a simple weight-only quantization method only applied to MoE expert weights.
• Finally, we show an emerging sparsity of more than 80% in the expert weights to be zero from 2-bit quantization. The expert weight matrices are sparse and very low-precision at the same time, while still outperforming the dense counterpart trained on the same dataset.
2 BACKGROUND - CHALLENGES OF DEPLOYING MOE MODELS
In the widely used MoE architecture, even with a constant or only sub-linearly higher theoretical FLOPs by using top-1 or top-2 gating, the increased model size with additional experts has a serious negative impact on the inference performance in various aspects.
2.1 INCREASED MEMORY FOOTPRINT
First of all, due to the increased model size, the model requires much more accelerator memory. With modern accelerators like GPUs, the accelerator memory size is limited. So, more accelerators are required to handle 1 model which causes communication problem described next. Also, the model takes up more memory, so the batch size is limited to be small which prevents the optimal utilization of processing cores.
2.2 SLOWER INFERENCE SPEED
Increased communication overhead. In the distributed training and inference set-up for large scale models, it is natural to use many GPUs or accelerators for a single model. The model weights can be distributed across different accelerators with various techniques (Ren et al., 2021) and expert parallelism (Fedus et al., 2021). However, in Liu et al. (2022), it is shown that the communication overhead with expert parallelism at training time could take up to more than half of the entire end-to-end time depending on the number of GPUs and clusters. This could affect inference efficiency even more severely because inference usually needs fewer FLOPs numbers than training, and communication bottleneck will stand out more. Therefore, it is desirable to use as few numbers of accelerators as possible to avoid this overhead.
Memory bandwidth bottleneck with MoE layers. The increase in the model size not only causes communication overhead, but also brings a significant inference speed impact on the modern processor architectures. While performing beam search decoding, the size of activation (an individual token) is relatively small and the decoding operation is memory bandwidth bounded. This means transferring model weight matrices in a memory hierarchy is a huge bottleneck. With the increased number of experts, the burden of memory transfer increases even more, and directly impacts the inference speed.
Inference speed measurement. Table 1 shows an actual speed difference measured with dense and MoE models on an NVIDIA’s V100 GPU. Two models are encoder and decoder based on the transformer architecture (Vaswani et al., 2017), and have exactly the same model settings except for the number of experts. The speed measurements are done on the translation task from German to English using auto-regressive beam search with beam size of five. Both models are evaluated on the same PyTorch 1 with half-precision floating point (fp16). The MoE model uses top-1 gating which assigns only one expert for a given input token which provides the same theoretical FLOPs as the corresponding dense model (with the same embedding size). Due to the excessive memory transfer caused by the increased number of experts, the actual inference speed decreases by 60% of the original dense model’s speed as shown in the table.
To overcome these challenges, we focus on reducing the model size by utilizing quantization. In particular, the increased model size and latency come mostly from the expert FFN weights, which contribute 92.8% of all weights in this specific model setting, so the FFN weights are our main target for optimization. With the sparsity that emerges in the expert weights from low-bit quantization, we also explore a further sparsification opportunity with a simple magnitude pruning technique.
1https://github.com/pytorch/pytorch
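To see why the expert FFN weights dominate the parameter count, a rough estimate based on the architecture described in Section 4 (24 encoder and 12 decoder layers, embedding size 1,024, FFN hidden size 4,096, 32 experts at every other layer) can be sketched as below; gating layers, layer norms and biases are ignored, so this is only an approximation consistent with the reported 92.8%.
```python
# Back-of-the-envelope parameter count (approximation; ignores gating, layer norms, biases).
d_model, d_ffn = 1024, 4096
layers = 24 + 12                                       # encoder + decoder layers
moe_layers = layers // 2                               # an MoE layer at every other layer
num_experts = 32
vocab = 128_000

ffn_params = 2 * d_model * d_ffn                       # two linear layers per FFN
expert_params = moe_layers * num_experts * ffn_params  # expert FFNs
dense_ffn_params = (layers - moe_layers) * ffn_params  # remaining dense FFNs
attn_params = layers * 4 * d_model * d_model + 12 * 4 * d_model * d_model  # self + cross attention
embed_params = vocab * d_model

total = expert_params + dense_ffn_params + attn_params + embed_params
print(f"total ~ {total / 1e9:.1f}B, expert share ~ {expert_params / total:.1%}")
# ~5.3B parameters in total, with roughly 90% of the weights sitting in the expert FFNs.
```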
3 QUANTIZATION METHODS FOR MOE LAYERS
There are multiple design choices to quantize model weights. In this section, we analyze the numerical characteristics of different layers in a large MoE model, and describe the decisions we have made to most effectively quantize the MoE layers.
3.1 NUMERICAL DISTRIBUTION OF MODEL WEIGHTS
While quantizing matrices, it is desirable not to have outliers but to have smoothly distributed numerical values. Outliers usually skew the range to be quantized and make the scaling factors too large. Figure 1 shows weight distribution box plots of the linear layers in the MoE model's FFN blocks. Following the widely used practice, an MoE layer is placed at every other layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). Even-numbered layers {0, 2, ...} are expert FFN layers and odd-numbered layers {1, 3, ...} are normal dense FFN layers. First, all of them are centered around zero. However, dense FFN layers have a much larger range than MoE FFN layers, which indicates that dense FFN layers have more outliers than MoE FFN layers. This phenomenon is more prevalent in the second linear layers, sometimes reaching down to −8.0, as shown in Figure 1b. Figure 2 shows example histograms of an expert FFN weight and a dense FFN weight. As can be seen in Figure 2b, the example dense FFN layer suffers seriously from outliers. In contrast, the expert FFN weights in Figure 2a show a smooth distribution without any major outliers. We observe a similar pattern across different layers and different experts. In Appendix C, we additionally include the statistics of all layers. This statistical observation indicates that MoE FFN layers are well suited for quantization.
Based on this observation, the FFN weights approximately follow a normal distribution with a mean value near zero. Therefore, we use symmetric quantization without needing to shift the zero point. Even for the dense FFN layers, the means and standard deviations are around zero except for the outliers, which can be seen in the box plot of Figure 1. Symmetric quantization also has the advantage of mapping many weight values near the center to zero, which can result in sparse model weights.
3.2 QUANTIZATION ALGORITHMS
3.2.1 QUANTIZATION TECHNIQUES
We try two quantization techniques: (i) linear quantization, which maps quantized integer values to the original float values uniformly, and (ii) log-based quantization from Aji & Heafield (2020), which maps the integer and float ranges on a log scale. In both cases, we choose channel-wise quantization over matrix-wise quantization based on the experiment in Appendix A.
Linear quantization with absolute maximum. The first technique is linear quantization which, given a matrix A and b bits, encodes A as follows:
s_j = \frac{2 \times \max(|A_{:,j}|)}{2^b - 1}, \qquad Q_{:,j} = \mathrm{int}\!\left(\frac{A_{:,j}}{s_j}\right)
where s is the scaling factor, which can be chosen per channel as shown or for the whole tensor. At inference time, the quantized Q is dequantized back to A' with the scaling factor s as follows:
A'_{:,j} = Q_{:,j} \times s_j
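As an illustration, a minimal PyTorch sketch of this channel-wise symmetric linear quantization could look as follows; it is only a sketch in which the paper's int(·) is realized with rounding plus clamping to the symmetric b-bit range, and all function and variable names are our own.
```python
import torch

def linear_quantize(A: torch.Tensor, bits: int):
    """Channel-wise symmetric linear quantization of a weight matrix A (one scale per column)."""
    qmax = 2 ** (bits - 1) - 1                         # largest representable magnitude, e.g. 1 for 2-bit
    s = 2.0 * A.abs().amax(dim=0) / (2 ** bits - 1)    # s_j = 2 * max(|A[:, j]|) / (2^b - 1)
    s = torch.where(s == 0, torch.ones_like(s), s)     # guard against all-zero channels
    Q = torch.clamp(torch.round(A / s), -qmax, qmax)   # integer codes
    return Q.to(torch.int8), s

def linear_dequantize(Q: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Dequantize back to fp16 right before the matrix multiplication."""
    return Q.to(torch.float16) * s.to(torch.float16)
```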
Log-scale quantization. The second technique is log-scale quantization, where 1 bit is kept for the sign and (b − 1) bits are used to encode the log-scaled values. Given a matrix A, the quantization formula is as follows:
P = \mathrm{sign}(A), \qquad T = \mathrm{clip}\!\left(\frac{|A|}{s},\, 1,\, 2^{\,1-2^{b-1}}\right), \qquad Q = \left\lceil \log_2\!\left(\tfrac{2}{3}\, T\right) \right\rceil
where s can be chosen in two ways: either (i) the absolute maximum or (ii) the optimal value that minimizes the mean squared error (MSE) between the quantized and original values, as described in Aji & Heafield (2020). We use the second option, with which we observe better accuracy after quantization. At inference time, the quantized weight values are dequantized with the following formula:
A' = P \times s \times 2^{Q}
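For completeness, a corresponding PyTorch sketch of log-scale quantization and dequantization is given below; here the scale s is assumed to be the channel-wise absolute maximum (option (i) above), the clip bounds are written as lower and upper limits, and the names are illustrative.
```python
import torch

def log_quantize(A: torch.Tensor, bits: int):
    """Log-scale quantization sketch: 1 sign bit plus (bits - 1) bits for a power-of-two exponent."""
    s = A.abs().amax(dim=0)                                  # per-channel scale (absolute maximum)
    s = torch.where(s == 0, torch.ones_like(s), s)
    P = torch.sign(A)                                        # sign part
    T = torch.clamp(A.abs() / s, 2.0 ** (1 - 2 ** (bits - 1)), 1.0)
    Q = torch.ceil(torch.log2(2.0 / 3.0 * T))                # quantized exponent
    return P, Q, s

def log_dequantize(P: torch.Tensor, Q: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    return P * s * (2.0 ** Q)
```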
Comparison of quantization techniques. Figure 3 shows the comparison between the two quantization techniques at low bit-widths, applied to expert FFN layers and dense FFN layers. For dense FFN layers, log-scale quantization performs slightly better, but neither works well at 2-bit, resulting in almost zero evaluation scores. For expert FFN layers, both techniques work similarly at 3 and 4 bits, but log-scale quantization loses accuracy severely at 2-bit. This is because 2-bit quantization provides only 4 bins for the integer values, one of which is zero. Log-scale quantization tries to split values near zero in a more fine-grained way, but this actually hurts performance compared to having enough zeros with linear quantization. Based on this experiment, we use linear quantization for compressing MoE FFN layers.
3.2.2 ROBUSTNESS OF EXPERT LAYERS TO QUANTIZATION
To better understand how applying quantization to different parts of an MoE model affects the accuracy, we conduct a set of experiments with various quantization bit-widths. We divide an MoE model into four parts: (i) expert FFN layers, (ii) dense FFN layers, (iii) self-attention layers and (iv) cross-attention layers. Based on the observation that linear quantization works better at lower bits, we use it for this set of experiments.
Figure 4 shows evaluation BLEU scores when quantizing different parts of the MoE model. We observe that quantizing expert FFN layers to 2-bit does not seriously impact the overall model quality. However, quantizing other parts of the model to 2-bit hurts the output quality significantly. Quantized cross-attention and self-attention blocks can still maintain the quality with 3-bit quantization, but their performance is impacted by 2-bit quantization. On the other hand, dense FFN layers are significantly impacted by lower-bit quantization at 2-bit and 3-bit. With 3-bit quantization, the model score drops by 23% of the original score, and 2-bit quantization on dense FFN layers gives an almost zero score. We also include the same study on a dense model in Appendix B, where a similar pattern with 2- and 3-bit quantization is observed.
3.3 MIXTURE OF QUANTIZED EXPERTS (MOQE)
Based on the experiments from the previous parts of this section, we propose a very simple, highly effective and accurate quantization recipe for MoE models.
• Apply weight-only quantization while keeping activations in fp16.
• Quantize expert FFN layers only.
• Use channel-wise and symmetric quantization.
• Choose one of the two quantization settings depending on the quantization precision:
1. (3-bit or higher): Directly quantize the trained MoE model without additional calibration.
2. (2-bit): Fine-tune the model with Quantization Aware Training (QAT), as described below.
Sparse experts with quantization aware training. QAT is a well-known method used to recover the accuracy loss from quantization (Gholami et al., 2022). In our case, to quantize to 2-bit precision, we continue training the model with the original training data while applying quantization only on the forward pass computation, as presented in (Wu et al., 2020; Bengio et al., 2013), to recover the accuracy loss. As we use symmetric quantization with 2-bit, the zero numerical value is always included. Due to the normal distribution of the expert weights centered around zero, many weight values naturally turn into zeros. This procedure results in sparse expert matrices.
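A minimal sketch of this forward-only (straight-through estimator) fake quantization of the expert FFN weights could look like the following; it reuses the channel-wise symmetric scheme from Section 3.2.1, and the class and function names are illustrative rather than the paper's actual implementation.
```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    """Quantize-dequantize weights on the forward pass; pass gradients straight through."""

    @staticmethod
    def forward(ctx, w, bits):
        qmax = 2 ** (bits - 1) - 1                                    # e.g. +-1 for 2-bit
        s = 2.0 * w.abs().amax(dim=0, keepdim=True) / (2 ** bits - 1) # channel-wise scale
        s = torch.where(s == 0, torch.ones_like(s), s)
        return torch.clamp(torch.round(w / s), -qmax, qmax) * s       # fake-quantized weights

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                                      # straight-through estimator

def expert_ffn_forward(x, w1, w2, bits=2):
    """Expert FFN forward with fake-quantized weights; other layers stay in full/half precision."""
    h = torch.relu(x @ FakeQuantSTE.apply(w1, bits))
    return h @ FakeQuantSTE.apply(w2, bits)
```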
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Task. We use a multilingual machine translation task for our experiments with two different datasets, which cover 20 language directions and 10 language directions respectively. We also evaluate the proposed method on a different task, presented in Appendix D. We use sacrebleu2 on the detokenized output to measure the accuracy of the models. A single NVIDIA PCIE V100 running inside a Docker container with Ubuntu 20.04 and CUDA 11.6 is used for all experiments, and all code is compiled with nvcc and gcc/g++ 9.3. We measure the end-to-end inference runtime on the evaluation dataset.
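As a pointer, BLEU on detokenized output can be computed with the sacrebleu Python API roughly as follows (illustrative only; the toy sentences are placeholders, and the paper may equally use the command-line interface):
```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]    # detokenized system outputs, one string per sentence
references = [["the cat sat on the mat"]]  # one aligned reference stream (list) per reference set
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```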
Datasets. We use two different datasets described below. For the larger dataset setting, we use an internally collected dataset consisting of 6 different languages: German (de), French (fr), Italian (it), Spanish (es), Dutch (nl) and English (en). The data are crawled from the web, and each language pair has at least several hundred million sentences. We use a 128,000 sub-word vocabulary built with the sentencepiece3 library. The number of training sentences is included in Appendix G. For the smaller dataset setting, we use the WMT-10 benchmark dataset widely used for public benchmarks (Wang et al., 2020; Kim et al., 2021). There are 32.5 million sentence pairs for English-centric 20 language pairs including French (fr), Czech (cs), German (de), Finnish (fi), Latvian (lt), Estonian (et), Romanian (ro), Hindi (hi), Turkish (tr) and Gujarati (gu).
Model architecture. For all the experiments with the large dataset, we use 24 transformer (Vaswani et al., 2017) encoder layers and 12 transformer decoder layers following the deeper-encoder and shallower-decoder practice (Kim et al., 2019; Kasai et al., 2021) to be more efficient at auto-regressive decoding. The embedding dimension is 1,024 and the FFN hidden dimension is 4,096. To encode positional information into the hidden state, we use Transformer with Untied Positional Encoding (TUPE) proposed in Ke et al. (2021) instead of the conventional sinusoidal positional embedding. Another design choice is the location of layer normalization. For training stability, we use the pre-layer normalization proposed in Xiong et al. (2020) instead of the original post-layer normalization from (Vaswani et al., 2017). We train MoE and dense models for the comparison. The model architecture choices mentioned here are common to both models. The only difference between the dense and MoE models is the number of experts. We use 32 experts for the MoE model trained with the larger web data. We use beam search decoding with a beam size of 5. For the experiments with the smaller dataset, we use 12 transformer encoder layers and 6 transformer decoder layers. The embedding dimension is 768 and the FFN hidden dimension is 3,072. In this setting, we use MoE layers with 128 experts at every other layer.
MoE architecture. For the MoE-specific settings, we use the top-1 learned gating from Fedus et al. (2021) and place an MoE layer at every other (even-numbered) layer (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021). During the training of MoE models, we use jittering noise and a balancing loss (ratio of 0.01) as suggested in Lepikhin et al. (2020); Fedus et al. (2021) to distribute expert utilization more uniformly. To prevent overfitting and better regularize the model, we use gating dropout (0.2) (Liu et al., 2022) as well.
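To make the gating setup concrete, a simplified sketch of top-1 gating with multiplicative jittering noise and the auxiliary balancing loss (coefficient 0.01) is shown below; capacity limits, expert parallelism and the exact noise formulation used in the paper are omitted, and all names are illustrative.
```python
import torch
import torch.nn.functional as F

def top1_gate(x, gate_weight, jitter_eps=0.01, balance_coef=0.01, training=True):
    """x: [tokens, d_model]; gate_weight: [d_model, num_experts]."""
    if training and jitter_eps > 0:
        # Multiplicative jittering noise on the gate input.
        x = x * torch.empty_like(x).uniform_(1.0 - jitter_eps, 1.0 + jitter_eps)
    probs = F.softmax(x @ gate_weight, dim=-1)      # router probabilities per token
    gate_vals, expert_idx = probs.max(dim=-1)       # top-1 expert per token

    # Auxiliary balancing loss: fraction of tokens routed to each expert times its mean router probability.
    num_experts = gate_weight.shape[-1]
    tokens_per_expert = F.one_hot(expert_idx, num_experts).float().mean(dim=0)
    mean_prob_per_expert = probs.mean(dim=0)
    balance_loss = balance_coef * num_experts * torch.sum(tokens_per_expert * mean_prob_per_expert)
    return expert_idx, gate_vals, balance_loss
```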
4.2 MOQE PERFORMANCE RESULTS
We apply the MoQE quantization recipe to an MoE model and compare its performance with several dense models in Table 2. This experiment is done on the larger web dataset. The baseline is a dense model trained on the same dataset as the MoE model. Throughput, memory size and sparsity are all measured with the fp16 precision model. As additional comparison points, the dense model is also quantized to 8-bit and 4-bit only on the even-numbered FFN layers, which is the best configuration for quantizing the dense model, as described in Appendix B. For the MoE model, various quantization settings ranging from 8-bit to 2-bit are measured together with the original fp16 performance. For 2-bit quantization, additional QAT is applied. Finally, we apply a magnitude-based pruning approach to the 2-bit quantized model to obtain a sparser model.
2https://github.com/mjpost/sacrebleu 3https://github.com/google/sentencepiece
First of all, the MoE model achieves a 2.87% improvement in BLEU score while increasing the model size to 8.38X that of the original dense model. When 4-bit post-training quantization is applied, it still maintains a 2.11% higher BLEU score than the original dense model. It also achieves an even faster speed than the dense model, which corresponds to a 2.7X speed-up over the fp16 MoE model. This also reduces the memory consumption significantly, from 8.38X to 2.67X of the dense model. With 2-bit QAT, the MoE model can still maintain 1.88% higher quality than the original dense model, while the model size is now only 1.71X of the original dense model. Also, the matrices are sparse, with up to 79.1% of the values being zeros.
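As a rough, back-of-the-envelope sanity check (our own approximation, not from the paper), the memory figures can be estimated from the weight breakdown alone, ignoring per-channel scaling factors, embedding details and activation memory:
```python
# Approximate memory of the 5.3B-parameter MoE model under expert-only weight quantization.
total_params = 5.3e9
expert_share = 0.928                     # fraction of weights in expert FFNs (Section 2)
dense_equiv = total_params / 8.38        # parameter count of the corresponding dense model

def model_bytes(expert_bits):
    expert = total_params * expert_share * expert_bits / 8
    rest = total_params * (1 - expert_share) * 2          # remaining weights stay in fp16 (2 bytes)
    return expert + rest

fp16_moe = total_params * 2
for bits in (4, 2):
    size = model_bytes(bits)
    print(f"{bits}-bit experts: {size / fp16_moe:.2f} of fp16 MoE size, "
          f"{size / (dense_equiv * 2):.2f}x of fp16 dense size")
# Within roughly 10% of the reported 2.67x (4-bit) and 1.71x (2-bit) of the dense model size;
# the remaining gap comes from details (scaling factors, rounded ratios) not modeled here.
```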
Figure 5 shows the sparsity distribution across different layers. The second linear layers, which follow the non-linear activation, show higher sparsity than the first linear layers. Some layers reach up to 85% sparsity. We include a further investigation of sparsity with a magnitude-based pruning approach in Appendix I.
4.3 ROBUSTNESS COMPARISON BETWEEN MOE AND DENSE MODELS
We compare robustness against low-bit quantization between MoE and dense models using post-training quantization without any QAT. For the dense model, quantization with different bit-widths is applied to the even-numbered FFN layers; Appendix B shows this is the best layer selection for the dense model. We use two different datasets to verify that the proposed quantization method works in different model settings.
Figure 6 presents the experiment with the model trained on the larger dataset. It shows the average BLEU scores at different quantization precisions for both MoE and dense models. The MoE model maintains accuracy within -0.3 BLEU down to 3-bit and within -1.82 BLEU at 2-bit. On the other hand, the dense model preserves accuracy only down to 4-bit, and starts to lose significant accuracy, more than 2 BLEU points, when it goes down to 3-bit. At 2-bit, the dense model loses most of its capability, dropping by 42.96 BLEU points. Table 9 shows the score differences caused by quantization for both MoE and dense models on 10 different language-pair translations.
Figure 7 presents the experiment with the model trained on the smaller dataset. In this setting, each individual expert is smaller, but there are 4 times more experts in one MoE layer. They are also trained on a smaller dataset, so they do not have knowledge equivalent to the previous model trained on the larger dataset. As can be seen in the figure, the quantization performance shows a similar pattern. The MoE model preserves accuracy even when it is quantized to 2 or 3 bits. However, the dense model quickly loses performance when it is quantized below 4-bit. Again, the MoE model is much more robust to quantization than the dense model.
5 CONCLUSION AND FUTURE WORKS
This paper shows, through various experiments, how robust MoE models are to low-bit quantization. By analyzing component-wise sensitivity and various quantization design choices, we present an efficient and effective way to reduce the model size, which results in a 4.9X model size reduction. With an optimized runtime, the 4-bit quantized model can run 2.71X faster than the fp16 model. We also show that 2-bit quantization can achieve more than 79% sparsity in the expert weights. These results naturally suggest interesting future research directions. The discovered robustness of expert layers can guide better ways to train MoE models. If we can better control the splitting of the latent space, better MoE models can be acquired. Analyzing the interactions between expert FFN layers and the other, shared layers in the model could guide a way to build a composable model. In particular, as presented in Appendix E, we observe that quantization sometimes improves accuracy on tasks in a specific situation. Another important direction will be studying how to accelerate sparse expert computation on modern hardware with software/hardware co-design. This will eventually make MoE models much more efficient in both training and inference.
A CHANNEL-WISE VS MATRIX-WISE QUANTIZATION
Scaling factors are calculated by the quantization algorithm and stored as half-precision floating-point (fp16) numbers used to dequantize the matrices. These factors can be chosen at the channel scale or at the whole-matrix scale. As shown in Figure 8, channel-wise quantization gives considerably higher scores than tensor-wise quantization, especially at low precision. The additional parameters required to store channel-wise scaling factors are small, because only one value is needed per channel, which amounts to less than 1% of the total parameters in a matrix. Therefore, we use channel-wise quantization for all the quantization experiments.
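As a quick check of the less-than-1% claim, consider an example 1,024 x 4,096 linear layer (the FFN dimensions used in Section 4); the fraction of extra storage for channel-wise fp16 scales is tiny:
```python
rows, cols = 1024, 4096
weights = rows * cols      # 4,194,304 quantized weights
scales = cols              # one fp16 scale per channel (column)
print(f"scale overhead: {scales / weights:.3%}")   # about 0.098%
```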
B QUANTIZATION OF DIFFERENT LAYERS IN A DENSE MODEL
In the paper, we compare a dense model and an MoE model in terms of quantization robustness. To make a fair comparison, we consider quantizing only half of the dense transformer blocks' FFNs, because we quantize only expert weights, which exist only in every other (even-numbered) block. We compare three different configurations: (1) quantizing even-numbered blocks' FFNs only, (2) quantizing odd-numbered blocks' FFNs only and (3) quantizing all FFN layers. As can be seen in Figure 9, quantizing even-numbered blocks' FFNs affects the accuracy the least, and quantizing all FFN layers gives the worst result. Based on this experiment, we quantize only even-numbered transformer blocks' FFNs for the dense model in all the experiments and comparisons.
C SKEWNESS OF WEIGHT MATRICES IN MOE AND DENSE MODELS
In the analysis of model weight distributions in Section 3, we observe that the dense model's FFN layers tend to have more outliers than the MoE model's expert FFN layers. We measure the skewness of these weight distributions in Table 3.
D ABSTRACTIVE SUMMARIZATION TASK PERFORMANCE
To validate that the quantization performs well on a different task and model, we evaluate the quantization performance of a 10.1B MoE (64 experts) model on an abstractive summarization task, XSUM (Narayan et al., 2018). Table 4 shows that the MoE model performs well with low-bit quantization such as 2-bit and 3-bit.
E BETTER GENERALIZATION WITH EXPERT QUANTIZATION
We observe an interesting phenomenon: quantization actually improves the evaluation score on a dataset from a different domain. We trained an MoE model with 64 experts (10.1B) on 50 different language translations (98 English-centric language pairs). When we evaluate this model on a different-domain subset of 6 languages (German, Spanish, French, Italian, Dutch, English), the evaluation BLEU score increases as we quantize the experts down to 3-bit, without any additional QAT or calibration. With 3-bit quantization, the score increases by more than 6.42% on non-English to English and 6.96% on English to the other languages. In particular, the English-to-Italian and Italian-to-English scores increase by more than 10%, which is quite significant. The results are summarized in Table 5. We are still analyzing the reason for this phenomenon, but we think it is related to how MoE models learn representations. MoE layers might learn very specific knowledge with their increased capacity, while the shared layers learn more generic knowledge. By blurring the representation from the MoE layers, the model becomes more capable on general tasks. This is one of our future research areas.
F ADDITIONAL SPARSITY DISTRIBUTION
We include the sparsity distribution across different layers in a dense model quantized with 4-bit. As can be seen in Figure 10, the sparsity is overall low. The second linear layers in the FFNs show slightly higher sparsity, but all of them remain below 30%.
G MACHINE TRANSLATION DATASET SUMMARY
Table 6 shows the number of parallel sentences used to train the dense and MoE models. All languages have at least 300 million sentences, and the differences in the number of sentences among languages are less than a factor of two.
H QUANTIZATION AWARE TRAINING (QAT)
For QAT with the straight-through estimator, we use the hyper-parameters listed in Table 7. Figure 11 shows the validation loss curve of one training run with 2-bit expert quantization.
I MAGNITUDE PRUNING EXPERIMENTS
Inspired by the sparsity that emerges in expert layers, we apply simple magnitude-based pruning to the MoE model we experiment with. We apply different threshold values from 0.00001 to 0.5, setting all weight values smaller than the threshold to zero. We apply 2- to 8-bit quantization together with the pruning. Figure 12 shows how model performance varies with the achieved sparsity. Even at a sparsity level of 90%, the model preserves a certain level of task capability. Compared to Gale et al. (2019), this shows much higher performance at the same sparsity. This is another example of the robustness of expert weights.
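A minimal sketch of such a magnitude-pruning sweep (with illustrative names and a random stand-in tensor; not the paper's actual script) could look like:
```python
import torch

def magnitude_prune(w: torch.Tensor, threshold: float) -> torch.Tensor:
    """Zero out every weight whose magnitude is below the threshold."""
    return w * (w.abs() >= threshold)

expert_weight = torch.randn(1024, 4096) * 0.05    # stand-in for a trained expert FFN weight
for t in [1e-5, 1e-3, 1e-2, 1e-1, 0.5]:
    pruned = magnitude_prune(expert_weight, t)
    sparsity = (pruned == 0).float().mean().item()
    print(f"threshold={t:g}: sparsity={sparsity:.1%}")
```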
J DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON PUBLIC WMT DATASET
Table 8 shows individual BLEU score changes with various quantization bits for MoE and dense models trained on public WMT dataset.
K DETAILED BLEU SCORE DIFFERENCES WITH QUANTIZATION APPLIED TO THE MODEL TRAINED ON THE INTERNAL WEB DATASET
Table 9 shows individual BLEU score changes with various quantization bits for MoE and dense models measured with the internal validation dataset. Table 10 shows the same model's evaluation performance on two public WMT datasets. | 1. What are the key findings of the paper regarding the characteristics of Mixture of Experts (MoE) models when their expert layers are quantized?
2. What are the strengths and weaknesses of the paper, particularly in terms of technical innovation and experimental settings?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the limitations of the paper, such as the lack of deeper insights into improving quantization performance for MoE models? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigated the characteristics of Mixture of Experts (MoE) models when their expert layers are quantized. The authors revealed that the expert layers have more evenly distributed data than dense layers, and thus they are more robust to quantization. Motivated by this observation, the authors applied low-bit quantization to MoE's expert layers and achieved memory savings with slight accuracy degradation.
Strengths And Weaknesses
(Strength)
One of the first efforts to investigate the quantization impact on MoE models
(Weakness)
Although the observations are interesting, there is little technical innovation based on them. The proposed quantization schemes follow conventional quantization techniques.
The authors claimed the gain in throughput, but the experimental settings for measuring hardware speedup are not clear.
The evaluation of accuracy is quite limited (presented only one translation task).
Clarity, Quality, Novelty And Reproducibility
This paper mostly discussed the phenomena the authors observed after applying basic quantization techniques to MoE models. It would be more desirable to provide deeper insights on what can be further improved to innovate quantization performance for MoE models. |
ICLR | Title
Mixture of Quantized Experts (MoQE): Complementary Effect of Low-bit Quantization and Robustness
Abstract
Large Mixture of Experts (MoE) models can achieve state-of-the-art quality on various language tasks, including the machine translation task, thanks to efficient model scaling with expert parallelism (Fedus et al., 2021). However, this brings a fundamental issue of larger memory consumption at deployment time. Furthermore, it results in significant inference speed degradation at auto-regressive decoding steps due to the increased memory transfers. In this paper, we propose Mixture of Quantized Experts (MoQE), a simple weight-only quantization method applying ultra-low-bit (down to 2-bit) quantization only to expert weights, to mitigate the increased memory and latency issues of MoE models. We show that low-bit quantization together with the MoE architecture delivers reliable model performance while reducing the memory size significantly, even without any additional training. In particular, expert layers in MoE models are much more robust to quantization than conventional feedforward network (FFN) layers. In our comprehensive analysis, we show that MoE models with 2-bit and 80% sparse expert weights can deliver better model performance than the dense model trained on the same dataset. We show how quantizing different parts of the model affects performance, with various experiments using a large MoE model (5.3B). As a result of low-bit quantization, we show that the model size can be reduced by 79.6% compared to the original half-precision floating point (fp16) MoE model. This cuts the size of the 5.3B-parameter model from 8.4x of the dense model down to only 1.7x of the dense model after 2-bit quantization, while still preserving 1.88% higher accuracy than the dense model. Combined with an optimized GPU runtime implementation, it also achieves a 2.7X speed-up, which is even slightly faster than the FLOPs-equivalent dense model.
1 INTRODUCTION
Large Language Models (LLMs) have shown their effectiveness on various language tasks by increasing the number of trainable parameters within the framework of pre-training a model on large-scale data and applying it to different downstream tasks (Devlin et al., 2018; Radford et al., 2018; Liu et al., 2019; Raffel et al., 2020). With the advancement of distributed large-scale training methods (Shazeer et al., 2018; Rasley et al., 2020; Ren et al., 2021; Baines et al., 2021) and large-scale data collection (Raffel et al., 2020; Hoffmann et al., 2022), the models have grown even larger and set new state-of-the-art performance with the increased model capacity (Brown et al., 2020; Rae et al., 2021; Zoph et al., 2022; Zhang et al., 2022; Smith et al., 2022; Chowdhery et al., 2022). However, the cost of training these models increases whenever more parameters are added, and this may not be sustainable.
As a solution to this issue, sparsely activated models (Shazeer et al., 2017) have been more widely adopted and show significant efficiency improvements in terms of model size scaling, enabling up to trillions of parameters to be trained more efficiently while achieving better model accuracy (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). Mixture-of-Experts (MoE) models are one type of sparsely activated model, replacing a single layer in a model with a group of parallel layers, called experts, combined with a gate layer. For a given input, the gate layer selects a subset of the experts from the group and uses them to process
the input. By limiting the number of selected layers for a given input to one or two, the theoretical FLOPs stay almost constant even if we add hundreds of parallel layers to the MoE group. Thus far, most studies have shown that it is effective to increase the capacity of models by replacing the feedforward networks (FFN) of Transformer (Vaswani et al., 2017) blocks with an MoE layer consisting of multiple FFN layers together with a gating network (Lepikhin et al., 2020; Fedus et al., 2021; Kim et al., 2021; Artetxe et al., 2021). One of the most unique and critical components of MoE models is the gating network, which decides how to conditionally select experts for each input, and there have been various studies improving it to achieve better training convergence (Lewis et al., 2021; Roller et al., 2021; Zuo et al., 2021; Clark et al., 2022; Liu et al., 2022; Zhou et al., 2022); these are well surveyed in Fedus et al. (2022).
In spite of the progress on training MoE models, there have been only a handful of studies related to MoE model inference. Rajbhandari et al. (2022) design a more efficient MoE architecture and distributed runtime to achieve a 7.3X inference speed-up. Kudugunta et al. (2021) use task-specific information to reduce the size of the model at deployment time by loading only task-specific experts. Kim et al. (2021) prune some experts at deployment time to reduce the model size while trading off model performance. Zoph et al. (2022) use a knowledge distillation technique to distill a large MoE model into a smaller dense model to reduce memory consumption and improve throughput. Even with all these proposed techniques, there has not been a solution that accelerates the inference of MoE models while maintaining the accuracy.
Quantization is a model acceleration and compression technique that approximates a floating point number with a lower-precision number. Various studies show that quantization is effective for accelerating neural network inference (Rodriguez et al., 2018; Stock et al., 2019; Choukroun et al., 2019; Gholami et al., 2022). In particular, it has been known to be very effective in natural language generation tasks such as machine translation (Kim et al., 2019; Aji & Heafield, 2020; Fan et al., 2021) and natural language understanding (Kim & Awadalla, 2020). However, there has not been an in-depth study about how quantization works with large MoE models.
Recently, Dettmers et al. (2022); Yao et al. (2022) have studied how quantization works on large scale language models. Dettmers et al. (2022) look at outlier features in the activations of large language models and propose to decompose them while performing matrix multiplications. In our quantization method, this is not needed because it is a weight-only quantization and outliers in activations cannot affect the performance. Moreover, the weights are dequantized back to fp16 when the matrix multiplication is performed, so our approach does not require special low-bit instructions. We also show that it can be applied to bit-widths lower than 8-bit for large MoE models. ZeroQuant (Yao et al., 2022) presents a series of techniques, including knowledge distillation (Kim & Rush, 2016), for achieving higher quality quantization. Our focus is to exploit the intrinsic characteristics of MoE layers based on our investigation, and we show that a simple quantization algorithm can achieve significantly higher efficiency while maintaining quality.
Our contributions in this paper are as below.
• We present extensive studies of how applying low-bit (down to 2-bit) quantization to different layers of MoE transformer models affects model accuracy, together with comparisons to the corresponding dense model with the same embedding size.
| 1. What is the focus of the paper regarding Mixture of Experts transformer models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of aggressive quantization and sparsity?
3. Do you have any concerns or suggestions regarding the presentation and organization of the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or issues that the reviewer raises regarding the experimental results and comparisons with other works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents an investigation of Mixture of Experts (MoE) transformer models with aggressive quantization (2-4 bits) of the Feed Forward Network (FFN) weights. 2-bit quantization of these layers limits the model size increase due to MoE usage to 1.7x of the original model (instead of 8.4x), while retaining some of the higher-precision MoE accuracy gains (+1.88% instead of +2.87%). 2-bit quantization also results in significant sparsity of the FFN layers (~80%).
Strengths And Weaknesses
Strengths:
paper convincingly demonstrates that aggressive quantization of the expert FFN retains a significant accuracy improvement while limiting the increase in model size
ablation studies give valuable information on where quantization is best applied (expert FFN)
observation of high sparsity resulting from aggressive quantization may also be leveraged by future studies
Weaknesses:
primarily an observational study which applies known quantization techniques to known architectures
a trade-off remains between model size and accuracy improvement (i.e., the MoE model is still larger than corresponding dense models, especially when compared to the quantized versions)
only weights are quantized, not activations. Consequently, inference speedup is necessarily limited
several throughput values on Table 2 are missing for both the dense and MoE model. What's the reason?
as a result of the authors attempting to emphasize their best results, some statements can be misleading. For example, the conclusions read "We also show 2-bit quantization could achieve more than 80% sparsity in the expert weights" (emphasis mine) which refers to the 2-bit pruned results which comes at the expense of losing most MoE gains in accuracy. I would recommend to tune down this and similar statements
Clarity, Quality, Novelty And Reproducibility
Novelty is limited as existing techniques and architectures are leveraged. However, the set of experiment performed and the resulting observations are new and interesting.
Readability is fine but the manuscript would benefit from extensive polishing and proofreading. Some recommendations:
section 2: fix "the model requires much more memory to load the model"
section 2: fix "the actual inference speed decreases slower than 40%"
section 2 lists 4 challenges to MoE deployment. It seems to me the challenges are two (or three): memory footprint and slower training/inference. The latter is caused by communication overhead and bandwidth bottleneck. I recommend reorganizing this section
paper structure could also be improved:
section 3 describes quantization of specific layers and discusses results before introducing the model architectures (Section 4)
table 1 shows MoE weight % for an unspecified "specific model setting", which is later introduced in Section 4
section 3: weight distributions in fig 2a,b could be presented on a semi-log scale to highlight the outliers (maybe combining the two figures into one)
section 3.3 mentions QAT for the first time and introduces it in the process flow for 2-bit quantization based on "the experiments from the previous parts of this section". Are the results shown in Fig. 4 obtained with PTQ? This is never mentioned.
multiple figure labels read "qunatization bits" |
ICLR | Title
FEED: Feature-level Ensemble Effect for knowledge Distillation
Abstract
This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work on factor transfer. Factor transfer is a knowledge transfer method that improves the performance of a student network with a strong teacher network. It transfers the knowledge of a teacher at the feature-map level using a high-capacity teacher network, and our training algorithm FEED is an extension of it. FEED aims to transfer ensemble knowledge, using either multiple teachers in parallel or multiple training sequences. Adapting the peer-teaching framework, we introduce a couple of training algorithms that transfer ensemble knowledge to the student at the feature-map level, both of which help the student network find more generalized solutions in the parameter space. Experimental results on CIFAR-100 and ImageNet show that our method, FEED, yields clear performance enhancements without introducing any additional parameters or computations at test time.
1 INTRODUCTION
Recent successes of CNNs have led to the use of deep learning in real-world applications. These applications increasingly require deep CNNs trained on multi-class datasets to find manifolds separating different classes. To meet this need, deep and parameter-rich networks have emerged that have the power to find manifolds for large numbers of classes. However, these deep CNNs suffer from overfitting due to their great depth and complexity, which results in a drop of performance at test time. In fact, even a small ResNet trained on a dataset such as CIFAR-100 (Krizhevsky et al.) has no room left to learn because its training losses converge, whereas its test accuracy remains significantly lower. This phenomenon has led to the need for training DNN models with appropriate regularization so that they generalize better. In fact, regularizing a model to achieve high performance on new inputs is a technique that has been used since the early era of machine learning.
Among them, model ensembling (Dietterich, 2000) is one of the popular regularization methods (Goodfellow et al., 2016) and has been used to alleviate the problem of overfitting in a single model. But it has the drawback that it requires multiple models, and inputs must be fed to each of them at test time. As a solution to this problem, Hinton et al. (2015) proposed Knowledge Distillation (KD), which trains a student network using soft labels from ensemble models or a high-capacity model. They obtained meaningful results on a speech recognition dataset, and this work brought about the advent of knowledge transfer, an area of representation learning that aims for performance improvements by training a weak student network with various forms of knowledge from expert teacher networks (Huang & Wang, 2017). It is also categorized as a family of model compression (Kim et al., 2018), since it helps the student network achieve higher accuracy with a fixed number of parameters.
Recent knowledge transfer algorithms can be roughly categorized along two axes. The first is whether an ensemble model or a single high-capacity model is used as the teacher. The second is whether the teacher’s prediction or information from its feature maps is transferred. Whereas the methods that use the teacher’s prediction can use both types of teachers, to the best of our knowledge, methods using feature-map-level information only use a single high-capacity model as a teacher. For example, Factor Transfer (FT) (Kim et al., 2018), Attention Transfer (AT) (Zagoruyko & Komodakis, 2016a), and Neuron Selectivity Transfer (NST) (Huang & Wang, 2017) set
the student network as a shallow network with a small number of parameters, and set the teacher network as a deeper and more powerful network instead of an ensemble of experts. One of the drawbacks of methods with a high-capacity teacher model is that such a high-capacity model may be hard to obtain (Lan et al., 2018).
On the contrary, the methods which use an ensemble of networks can transfer ensemble knowledge and also have the advantage of a peer-teaching framework (Hinton et al., 2015; Lan et al., 2018; Furlanello et al., 2018). Also, Zhang et al. (2017) showed that using the same type of network for transferring output-level knowledge can improve the performance of a network. However, methods that deliver knowledge at the feature-map level have the advantage that they can give more specific information to the student compared to methods that rely only on the output predictions of the teacher.
To take full advantage of both ensemble-teacher methods and feature-map methods, we came up with a new framework that delivers the knowledge of multiple networks at the feature-map level. In this paper, we train a new student network using teacher networks of the same type under the scheme of FT, a decent knowledge transfer algorithm that transfers knowledge using auto-encoding factors. We utilize the translator of FT to transfer ensemble knowledge, introducing two different kinds of ensemble-effective training algorithms:
• We recursively set the trained student network as a new teacher network that helps train a new student, accumulating ensemble knowledge at the feature-map level.
• We transfer knowledge to the student from multiple teachers simultaneously, expecting the student network to learn ensemble knowledge at the feature map level.
The paper is organized as follows. First, we briefly explain the related works, including the method FT. Then a couple of more advanced versions of FT are proposed. Next, we verify our proposed methods with experiments. Experimental results from our proposed training methods are compared with those of KD on the CIFAR-100 and ImageNet datasets. Finally, we compare our method with other recent knowledge transfer algorithms.
2 RELATED WORKS
Many researchers have studied ways to train models other than using a purely supervised loss. In the early days of these studies, Model Compression (Bucila et al., 2006) examined ways to compress information from ensemble models into one network. More recently, Hinton et al. (2015) proposed Knowledge Distillation (KD), which uses softened softmax labels from teacher networks when training the student network, and motivated many researchers to develop variants of it in various domains. The idea has been applied to transfer learning and also led to the advent of knowledge transfer with applications to many domains.
FitNet (Romero et al., 2014) applied KD to train a student network of different capacity to find a better trade-off between accuracy and time cost. Attention Transfer (Zagoruyko & Komodakis, 2016a) transferred the attention map of the teacher network to the student network and obtained meaningful results in knowledge transfer and transfer learning tasks. Huang & Wang (2017) also matched the features of the student and teacher networks by devising an MMD (Maximum Mean Discrepancy) loss term. Yim et al. (2017) introduced another knowledge transfer technique for faster optimization and applied it to transfer learning as well.
Differently from previous works, Abdulnabi et al. (2018) and Tang et al. (2016) transferred information in RNNs. Knowledge sharing was applied even to NAS (Neural Architecture Search) (Zoph & Le, 2016), yielding ENAS (Efficient NAS) (Pham et al., 2018), which overcomes the heavy time cost. Tarvainen & Valpola (2017) used Mean Teachers to train a student network in a semi-supervised manner, and Radosavovic et al. (2017) expanded semi-supervised learning to omni-supervised learning by proposing Data Distillation, a more applicable version of knowledge distillation. Recently, Transparent Model Distillation (Tan et al., 2018) investigated model distillation for transparency.
3 FACTOR TRANSFER
FT (Kim et al., 2018) is a knowledge transfer method that uses a pair of modules, a paraphraser and a translator, as mediators between the teacher and the student networks. The paraphraser and the translator are attached to the last convolution layers of the teacher and the student networks, respectively, and the translator is trained so that its output assimilates the output of the paraphraser.
3.1 TEACHER FACTOR EXTRACTION WITH PARAPHRASER
The paraphraser is trained in an unsupervised manner to map the teacher’s knowledge into another form so that the student network understands the knowledge more easily. It is designed with several convolution layers and transposed-convolution layers, in a similar way to autoencoders. The teacher’s paraphrased knowledge, called the teacher factor (FT), is the output of the last convolution layer in the paraphraser. The paraphraser is trained with a simple reconstruction loss function,
$L_{rec} = \lVert x - P(x) \rVert^2$, (1)
where the paraphraser P(·) takes the feature map x as an input. The convolution and transposed-convolution layers in the paraphraser are designed to make the spatial dimension of the teacher factor the same as that of the input, but the depth (number of channels) of the teacher factor can be made different from that of the input by a rate of k.
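The sketch below is one possible PyTorch realization of such a paraphraser; the layer counts, activations, and the value of k are illustrative assumptions rather than the exact configuration used in FT.

```python
import torch.nn as nn

class Paraphraser(nn.Module):
    """Conv encoder to the teacher factor, transposed-conv decoder back to the input shape."""
    def __init__(self, in_channels: int, k: float = 1.0):
        super().__init__()
        factor_channels = int(in_channels * k)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, factor_channels, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(factor_channels, factor_channels, 3, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(factor_channels, in_channels, 3, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(in_channels, in_channels, 3, padding=1),
        )

    def forward(self, x):
        factor = self.encoder(x)      # teacher factor F_T
        recon = self.decoder(factor)  # P(x), used in the reconstruction loss of Eq. (1)
        return factor, recon
```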
3.2 FACTOR TRANSFER WITH TRANSLATOR
The translator is designed with several convolution layers, whose output is called the student factor (FS). The student network and the translator are trained simultaneously with a combined loss consisting of the conventional cross-entropy loss Lcls and the factor transfer loss LFT as follows:
$L_{student} = L_{cls} + \beta L_{FT}$, (2)

$L_{FT} = \left\lVert \dfrac{F_T}{\lVert F_T \rVert_2} - \dfrac{F_S}{\lVert F_S \rVert_2} \right\rVert_p^p$, (3)
where ‖A‖p is the p-norm of the vectorized version of a tensor A. We train the translator to output FS that mimics FT . The training is done in an end-to-end manner so that the gradients from the translator also affect the student network. The translator is used only in the training phase, which means that the factor transfer method does not affect the computation at test time.
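As a concrete illustration of Eq. (3), the helper below computes the factor transfer loss between a student factor and a teacher factor; averaging over the batch is an assumption on our part, since the reduction is not spelled out in the text.

```python
import torch

def factor_transfer_loss(f_s: torch.Tensor, f_t: torch.Tensor, p: int = 1) -> torch.Tensor:
    """p-norm (to the p-th power) between the L2-normalized, flattened factors."""
    f_s = f_s.flatten(1)
    f_t = f_t.flatten(1)
    f_s = f_s / f_s.norm(p=2, dim=1, keepdim=True)
    f_t = f_t / f_t.norm(p=2, dim=1, keepdim=True)
    return (f_t - f_s).norm(p=p, dim=1).pow(p).mean()
```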
4 PROPOSED TRAINING ALGORITHMS
In deep CNNs, due mainly to the curse of dimensionality, the data points that lie in the data space are very sparse, and this phenomenon can easily be detected by applying algorithms like k-nearest neighbors. Necessarily, the decision boundaries that divide classes are multitudinous, because finding boundaries that fit a training dataset well is a relatively easy task. Even if networks with the same architecture are trained, the learned decision boundaries cannot be the same. This is why ensemble methods usually perform better than a single model despite their structural equality. Goodfellow et al. (2016) also state that different models will not all make the same errors on the test dataset.
Consider the conditions that determine the training procedure of a CNN. They include the structure of the CNN, the choice of optimizer, the random initialization, the sequence of mini-batches, and the types of data augmentation. If all of these conditions were the same for two CNNs, their training procedures would be identical. However, we usually fix only the structure of the CNN and keep the others random. Consequently, two networks with the same structure will definitely not learn the same decision boundaries.
Additionally, Kim et al. (2018) stated that they resolve the ‘inherent difference’ between two networks. Among these inherent differences, minimizing the difference in the structures of the CNNs can help the student better learn the knowledge of the teacher network. This motivation gives us the chance to propose several modified versions of existing methods. In this section, we explain the two feature-level ensemble training algorithms that we use for boosting the performance of a student network without introducing any additional computation at test time. The proposed methods are collectively called FEED, an abbreviation for Feature-level Ensemble Effect for knowledge Distillation.
4.1 SEQUENTIAL FEED
Taking advantage of the fact that the teacher network and the student network share the same structure, we propose a learning method that can accumulate and assemble knowledge by performing knowledge transfer several times. We name this algorithm sFEED (sequential FEED), and the training procedure is illustrated in Figure 1. The paraphraser is omitted for sFEED, because the inherent difference that FT mentions in its paper is attenuated by choosing the same type of teacher and student network. Additionally, omitting the paraphraser simplifies the training procedure: it not only eliminates the ambiguity in the choice of hyper-parameters, but also reduces the three-stage training procedure of FT to two stages.
Since the teacher and the student networks share the same architecture, problems arising from different types of architectures being used for the teacher and the student do not occur here. A similar approach was introduced in the paper of Furlanello et al. (2018), using output predictions. If the student network were trained standalone, it would perform similarly to the teacher network. But from the viewpoint of knowledge ensembling, since the teacher network delivers feature-level knowledge different from that of the student network, the student network benefits from it.
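The sFEED loop can be summarized as in the sketch below, where make_network, train_from_scratch, and train_student are hypothetical helpers standing in for the full training runs described in the text.

```python
# Hypothetical sketch of the sFEED procedure.
num_stacks = 5
teacher = train_from_scratch(make_network())           # Stack 1: ordinary supervised training
for stack in range(2, num_stacks + 1):
    student = make_network()                            # same architecture as the teacher
    student = train_student(student, teacher=teacher)   # minimizes L_cls + beta * L_FT
    teacher = student                                    # trained student becomes the next teacher
```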
4.2 PARALLEL FEED
Assuming that giving knowledge at the feature-map level and giving knowledge in an ensemble manner both have their advantages, we want these two kinds of methods to cooperate. To this end, we propose a training structure named pFEED (parallel FEED) that transfers ensemble knowledge at the feature-map level; it is our second main proposed method. In this way, we take advantage both of label-based methods, which deliver the information of the ensemble model, and of feature-based methods, which deliver more specific information than label-based methods. The proposed algorithm is compared with KD, which distills the knowledge of the teacher networks with ensembled labels. Our algorithm is shown in Figure 2(a), and the training method of KD is shown in Figure 2(b). Consistent with sFEED, we also omit the paraphraser in pFEED. Unlike the original FT, we use multiple teachers with multiple translators. If we use k different teachers, the loss terms are as follows:
$L_{student} = L_{cls} + \beta \sum_{n=1}^{k} L_{FT_n}$, (4)

$L_{FT_n} = \left\lVert \dfrac{x_n}{\lVert x_n \rVert_2} - \dfrac{F_S}{\lVert F_S \rVert_2} \right\rVert_1$. (5)
$L_{FT_n}$ is the FT loss from the n-th teacher network, and $x_n$ is the output feature map obtained from the n-th teacher network.
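A minimal sketch of the pFEED objective of Eqs. (4)-(5) is given below; here `translators` holds one small convolutional head per teacher applied to the student's last feature map, and the batch-mean reduction is our assumption.

```python
import torch
import torch.nn.functional as F

def pfeed_loss(logits, labels, student_feature, teacher_features, translators, beta):
    """Cross entropy plus one normalized-L1 factor loss per teacher (Eqs. 4-5)."""
    loss = F.cross_entropy(logits, labels)
    for x_n, translator in zip(teacher_features, translators):
        f_s = translator(student_feature).flatten(1)
        t_n = x_n.flatten(1)
        f_s = f_s / f_s.norm(dim=1, keepdim=True)
        t_n = t_n / t_n.norm(dim=1, keepdim=True)
        loss = loss + beta * (t_n - f_s).abs().sum(dim=1).mean()
    return loss
```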
5 EXPERIMENTS
In this section, we first show the classification results on CIFAR-100 (Krizhevsky et al.). Second, we explore the feasibility of our algorithm on ImageNet (Russakovsky et al., 2015), a commonly used large-scale dataset. For both datasets, we apply our algorithm, compare the results with KD, and analyze the results quantitatively. Third, we compare our results with state-of-the-art models on CIFAR-100. In the remaining subsections, we provide some analysis of our algorithm and explain the experimental details.
We chose three types of CNNs to check the applicability of our algorithms on CIFAR-100: ResNet (He et al., 2016), Wide ResNet (Zagoruyko & Komodakis, 2016b), and ResNeXt (Xie et al., 2017). For ResNets, we chose ResNet-56 and ResNet-110, which have fewer parameters compared to recent CNNs; WRN28-10 is a model that controls the widening factor and has many more parameters. The WRN28-10 model achieves the best classification accuracy on CIFAR-100 among the WRNs reported in their paper. ResNeXt29-16x64d also achieves the best classification accuracy on CIFAR-100 in its paper; this type of CNN controls the cardinality and has many more parameters compared to the other models. For ImageNet, we used ResNet-34 to confirm feasibility on large-scale datasets.
5.1 SEQUENTIAL FEED
The classification results of our algorithm on CIFAR-100 can be found in Table 1. The ‘Stack’ column in Table 1 denotes the number of times the student model is recursively trained. Table 2 shows the results when the paraphraser is used. Although the existence of the paraphraser seems to affect performance, omitting it appeared better for stronger networks. Additionally, as mentioned earlier, omitting the paraphraser simplifies the training procedure.
In many cases, the classification accuracy improves as the number of stacks increases. Though there are some fluctuations, it might be possible to achieve higher accuracy. We only experimented with up to 5 stacks, since all of them achieve fairly good accuracy compared to the baseline models. Note that even though we train translators on the student side, they are not counted in the Params column because they are not used at test time.
The results of sFEED on ImageNet are in Table 3. For the base model, we simply used the pre-trained model that PyTorch supplies, and achieved the desired result that both Top-1 and Top-5 accuracy improve at each stack.
5.2 PARALLEL FEED
In the experiments on pFEED, we used the same types of networks as in the previous experiments. For all four types of CNNs, we compared the classification results with those of KD, because we designed our training algorithm with the intention of receiving more ensemble-like knowledge from multiple teachers. The difference between pFEED and KD is that KD ensembles the knowledge at the output level, whereas our proposed method tries to draw the effect of the ensemble at the feature-map level, delivering more specific information. The results are in Table 4.
The ‘Scratch’ column shows the performance of the base networks, which are used in KD for the model ensemble and also used as teachers in pFEED. For all experiments, pFEED consistently obtained higher accuracy than KD. It is worth noting that the performance of KD is almost equivalent to pFEED for small networks, but for networks with a larger number of parameters, pFEED shows better accuracy than KD. This result matches the hypothesis that delivering feature-map-level information provides more specific information to the student.
The results of pFEED on ImageNet are in Table 5. We could also find some accuracy improvements on the ImageNet dataset, but we did not have enough resources to train models with more parameters and used only three teachers. We obtained decent results, but the improvements are not as strong as those of sFEED on ImageNet.
5.3 QUALITATIVE ANALYSIS
Reconstruction Loss: A paraphraser in FT can be interpreted as a convolutional autoencoder in that it uses convolution and transposed-convolution layers with a reconstruction loss, and the factor can then be interpreted as a latent vector z. Supposing that the reason for the accuracy gains shown in the previous tables is that the student learns ensemble knowledge, the student network is forced to learn information with higher complexity. Let us denote the input of the paraphraser as x. The increase in the complexity of the feature representation is equivalent to an increase in the complexity of x. In FEED training, since the number of parameters in the paraphraser is fixed, the size of z should also be fixed. Consequently, as the complexity of x increases, p(x|z) decreases, resulting in an increase of the reconstruction loss.
For both of our proposed training methods, we recorded the average training reconstruction losses of the paraphrasers, normalized by the size of the paraphraser, and plotted the curves in Figure 3 and Figure 4 (though we do not actually use paraphrasers for FEED training). In Fig. 3, Ph1 through Ph4 are the paraphrasers trained on the student networks of Stack1 through Stack4 in Table 1, and the paraphrasers in Figure 4 are trained on one of the scratch teacher networks and the subsequent student network of pFEED in Table 4. As expected, as the knowledge is transferred, the reconstruction loss becomes larger, which indicates that the student network learns more difficult knowledge and thus the classification accuracy increases. This trend closely matches the results in the tables. In Figure 3, the legend matches the trend of the ResNet-56 row in Table 1 for sFEED (in particular, Stack 2 and 3 have similar errors and, likewise, Ph2 and Ph3 are similar), and the curve in Figure 4 also follows the large performance increase in the WRN28-10 row of Table 4.
5.4 IMPLEMENTATION DETAILS
CIFAR-100: In the student network training phase, we used the l1 (p = 1) loss, and the hyper-parameter β in Eq. (2) was set to 500 for ResNets and 2,000 for WideResNet and ResNeXt. We tried to keep the training procedure the same as in the original papers. For ResNets, we set the initial learning rate to 0.1, decayed it by a factor of 0.1 at epochs 80 and 120, and finished training at 160 epochs. For WideResNets, we set the initial learning rate to 0.1, decayed it by a factor of 0.2 at epochs 60, 120, and 160, and ended training at 200 epochs. For ResNeXts, we set the initial learning rate to 0.1, decayed it by a factor of 0.1 at epochs 150 and 225, and finished training at 300 epochs. For all experiments, plain SGD is used as the optimizer, with momentum of 0.9, weight decay of 5 × 10−4, and a mini-batch size of 128. The ResNets and WideResNets were trained on a single Titan Xp and the ResNeXts were trained on four 1080 Ti GPUs. The same setting was applied to the translators, which have 3 convolution layers, since the translators were trained jointly with the student network.
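For reference, the ResNet schedule above maps directly onto a standard PyTorch optimizer/scheduler setup; `student` and `translator` below are assumed to be the already constructed modules.

```python
import torch

def make_optim(student, translator):
    """ResNet schedule from the text: SGD, lr 0.1, decay x0.1 at epochs 80 and 120 (160 total)."""
    params = list(student.parameters()) + list(translator.parameters())
    optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 120], gamma=0.1)
    return optimizer, scheduler
```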
Paraphraser: All the paraphrasers used in Table 2 were trained for 10 epochs with a learning rate of 0.1. The original paper states that the paraphrasers are trained for 30 epochs, but training beyond 10 epochs was unnecessary. All paraphrasers have 3 convolution layers, and since the teacher and student networks are of the same kind, we set the paraphrase rate k to 1 for simplicity of implementation. According to the FT paper, this choice did not seem to affect performance much.
ImageNet: The hyper-parameter β was set to 1,000, and following the training schedule of the PyTorch framework, training starts with a learning rate of 0.1, decays it by a factor of 0.1 at epochs 30 and 60, and finishes at epoch 90, with a mini-batch size of 256. All other conditions are the same as in the CIFAR-100 setting.
Setting of β: For ease of reproduction, the choice of β is important. Supposing that we use LFT with l1 (p = 1), the loss scale of LFT depends on the number of nodes. Empirically, if either Lcls or LFT dominates, accuracy diminishes compared to the balanced case, so we approximately adjusted the scale of LFT, resulting in different βs for different networks. Hyper-parameters of KD: We set the temperature T of the softened softmax to 4, as in KD (Hinton et al., 2015).
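The KD baseline term with temperature T = 4 mentioned above is typically implemented as in the sketch below; the exact weighting between the distillation and cross-entropy terms is not given in the text, so none is assumed here.

```python
import torch.nn.functional as F

def kd_term(student_logits, teacher_logits, T: float = 4.0):
    """Softened-softmax distillation term with temperature T (Hinton et al., 2015)."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```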
5.5 COMPARISON WITH KNOWLEDGE TRANSFER METHODS
The comparison with recent state-of-the-art knowledge distillation methods is in Table 6. The scores for the ONE method are from the paper of Lan et al. (2018), and the scores for the BAN method are from the paper of Furlanello et al. (2018). Both methods use final predictions to transfer knowledge, while our method uses output feature maps. Unfortunately, we could not prepare all variations of networks and hyper-parameters for a fully fair comparison, but we achieved decent performance.
6 CONCLUSION
In this work, we proposed a couple of new network training algorithms referred to as Feature-level Ensemble Effect for knowledge Distillation (FEED). With FEED, we can improve the performance of a network by injecting ensemble knowledge into the student network. The first one, sequential FEED, recursively trains the student network and incrementally improves performance. The second one, parallel FEED, trains the student network using multiple teachers simultaneously. The qualitative analysis with the reconstruction loss gives hints about the cause of the accuracy gains. The main drawback is the training time needed for multiple teachers, which is an inherent characteristic of any ensemble method, and pFEED causes a bottleneck by feeding inputs to multiple teachers. Consequently, applying it more efficiently will be our future work, together with applications to other domains. | 1. What is the novelty and significance of the proposed approach in comparison to previous works, specifically in distillation?
2. What are the reviewer's concerns regarding the methodology, such as the use of both translator and paraphraser, and the choice of distance metric?
3. How does the reviewer assess the impact and advantages of the proposed approach over traditional distillation methods?
4. Are there any questions or concerns regarding the implementation and technical aspects of the method, such as the necessity of the translator being non-linear?
5. How does the reviewer evaluate the clarity and quality of the paper's content, including the explanation of the method and its purpose? | Review | Review
I do not necessarily see something wrong with the paper, but I'm not convinced of the significance (or sufficient novelty) of the approach.
The way I understand it, a translator is added on top of the top layer of the student, which is nothing but a few conv layers that project the output to potentially the size of the teacher (by the way, why do you need both a paraphraser and translator, rather than making the translator always project to the size of the teacher which basically will do the same thing !? )
And then a distance is minimized between the translated value of the students and the teacher output layer. The distance is somewhat similar to L2 (though the norm is removed from the features -- which probably helps with learning in terms of gradient norm).
Comparing with normal distillation I'm not sure how significant the improvement is. And technically this is just a distance metric between the output of the student and teacher. Sure it is a more involved distance metric, however it is in the spirit of what the distillation work is all about and I do not see this as being fundamentally different, or at least not different enough for an ICLR paper.
Some of the choices seem arbitrary to me (e.g. using both translator and paraphraser). Does the translator need to be non-linear? Could it be linear? What is this mapping doing (e.g. when teacher and student have the same size) ? Is it just finding a rotation of the features? Is it doing something fundamentally more interesting?
Why this particular distance metric between the translated features? Why not just L2?
In the end I'm not sure the work as is, is ready for ICLR. |
1. What is the main idea behind the paper, and what are the novel aspects of the proposed method?
2. What are the strengths and weaknesses of the paper regarding its clarity, organization, and referencing?
3. How does the reviewer assess the numerical results presented in the paper?
4. Are there any typos or errors in the paper that need to be addressed?
5. What are the suggestions for improving the paper, particularly in terms of comparing the proposed method with other relevant works? | Review | Review
In summary, I think this paper contains some reasonable results based on a reasonable, moderately novel, idea, but unfortunately, it is not yet ready for publication. Reading it made me rather confused.
Good things:
- The main idea is sensible, though distilling into the same architecture (sFEED) is not that novel. I think the pFEED is probably the more novel part.
- The numerical results are quite good.
- It's a fairly simple method. If others reproduced these results, I think it would be useful.
Problems:
- Some parts of the paper are written in a way that makes the reader confused about what this paper is about. For example the first paragraph. Some motivations I just did not understand.
- Some parts of the paper are repeating itself. For example "introduction" and "related works". The section on related work also includes some quite unrelated papers.
- The references in the paper are often pointing to work that came much later than the original idea or to some pretty random recent papers. For example, the idea of model compression (or knowledge distillation) is much older than Hinton et al. I believe it was first proposed by Bucila et al. [1] (which the authors mention later as if knowledge distillation and model compression were very different ideas); it definitely doesn't come from Kim et al. (2018). Learning from intermediate representations of the network is at least as old as Romero et al. [2]. Compression into a network of the same architecture is definitely older than Furlanello et al. (2018). It was done, for example, by Geras et al. [3]. The paper also cites Goodfellow et al. (2016) in some pretty random contexts. I don't want to be too petty about references, but unfortunately, this paper is just below a threshold that I would still find acceptable in this respect.
- The comparison in Table 6 would make more sense if the same architectures were clearly compared. As it is, it is difficult to be certain where the improvement is coming from and how it actually compares to different methods.
Typos: Titap X, ResNext, prarphraser.
References:
[1] Bucila et al. Model Compression. 2006.
[2] Romero et al. FitNets: Hints for Thin Deep Nets. 2014.
[3] Geras et al. Blending LSTMs into CNNs. 2016. |
ICLR | Title
FEED: Feature-level Ensemble Effect for knowledge Distillation
Abstract
This paper proposes a versatile and powerful training algorithm named Feature-level Ensemble Effect for knowledge Distillation (FEED), which is inspired by the work of factor transfer. Factor transfer is one of the knowledge transfer methods that improve the performance of a student network with a strong teacher network. It transfers the knowledge of a teacher at the feature-map level using a high-capacity teacher network, and our training algorithm FEED is an extension of it. FEED aims to transfer ensemble knowledge, using either multiple teachers in parallel or multiple training sequences. Adopting the peer-teaching framework, we introduce a couple of training algorithms that transfer ensemble knowledge to the student at the feature-map level, both of which help the student network find more generalized solutions in the parameter space. Experimental results on CIFAR-100 and ImageNet show that our method, FEED, provides clear performance enhancements, without introducing any additional parameters or computations at test time.
1 INTRODUCTION
Recent successes of CNNs have led to the use of deep learning in real-world applications. These applications require deep CNNs to learn on multi-class datasets and find manifolds that separate the different classes. To meet this need, deep and parameter-rich networks have emerged that have the capacity to find manifolds for large numbers of classes. However, these deep CNNs suffer from the problem of overfitting due to their great depth and complexity, which results in a drop of performance at test time. In fact, even a small ResNet applied to a dataset such as CIFAR-100 (Krizhevsky et al.) will not have room to learn more because the training losses converge, whereas the test accuracy remains significantly lower. This phenomenon has led to the need for training DNN models with appropriate regularization so that they generalize better. In fact, regularizing a model to achieve high performance on new inputs is a technique that has been used since the early era of machine learning.
Among them, model ensemble (Dietterich, 2000) is one of the popular regularization methods (Goodfellow et al., 2016), which has been used as a way of alleviating the problem of overfitting in a single model. However, it has the drawback that it requires multiple models, and inputs must be fed to each of them at test time. As a solution to this problem, Hinton et al. (2015) proposed Knowledge Distillation (KD), which trains a student network using soft labels from ensemble models or a high-capacity model. They obtained meaningful results on a speech recognition dataset, and this work brought about the advent of knowledge transfer, an area of representation learning that aims at performance improvements by training a weak student network with various forms of knowledge from expert teacher networks (Huang & Wang, 2017). It is also categorized as one family of model compression (Kim et al., 2018), since it helps the student network achieve higher accuracy with a fixed number of parameters.
The recent knowledge transfer algorithms can be roughly categorized in two ways. The first is whether an ensemble model or a single high-capacity model is used as the teacher. The second is whether the teacher's prediction or information from its feature maps is transferred. Whereas the methods that use the teacher's prediction can use both types of teachers, to the best of our knowledge, methods using feature-map-level information only use a single high-capacity model as a teacher. For example, studies of Factor Transfer (FT) (Kim et al., 2018), Attention Transfer (AT) (Zagoruyko & Komodakis, 2016a), and Neuron Selectivity Transfer (NST) (Huang & Wang, 2017) set the student network as a shallow network with a small number of parameters, and set the teacher network as a deeper and more powerful network instead of an ensemble expert. One of the drawbacks of methods with a high-capacity teacher model is that such a high-capacity model may be hard to obtain (Lan et al., 2018).
On the contrary, the ones which use an ensemble of networks can transfer ensemble knowledge and also have the advantage of the peer-teaching framework (Hinton et al., 2015; Lan et al., 2018; Furlanello et al., 2018). Also, Zhang et al. (2017) showed that using the same type of network for transferring output-level knowledge can improve the performance of a network. However, the methods that deliver knowledge at the feature-map level also have the advantage that they can give more specific information to the student compared to the methods that only rely on the output predictions of the teacher.
To take full advantage of both ensemble-teacher methods and feature-map methods, we came up with a new framework that delivers the knowledge of multiple networks at the feature-map level. In this paper, we train a new student network using the same type of teacher networks with the scheme of FT, a decent knowledge transfer algorithm that transfers knowledge using auto-encoded factors. We utilize the translator of FT to transfer ensemble knowledge, introducing two different kinds of ensemble-effective training algorithms:
• We recursively set the trained student network as a new teacher network which helps training a new student, accumulating knowledge in ensemble at the feature map level.
• We transfer knowledge to the student from multiple teachers simultaneously, expecting the student network to learn ensemble knowledge at the feature map level.
The paper is organized as follows. First, we briefly explain the related works including the method FT. Then a couple of more advanced versions of FT are proposed. Next, we verify our proposed methods with experiments. Experimental results from our proposed training methods are compared with the case of KD on CIFAR-100 and ImageNet datasets. Finally, we compare our method with other kinds of recent knowledge transfer algorithms.
2 RELATED WORKS
Many researchers have studied ways to train models other than using a purely supervised loss. In early studies, Model Compression (Bucila et al., 2006) investigated ways to compress information from ensemble models into one network. More recently, Hinton et al. (2015) proposed Knowledge Distillation (KD), which uses softened softmax labels from teacher networks when training the student network, and motivated many researchers to develop variants of it for various domains. The method has been applied to transfer learning, and also led to the advent of knowledge transfer with applications to many domains.
FitNet (Romero et al., 2014) applied KD to train a student network with different capacity to find better trade off between its accuracy and time cost. Attention Transfer (Zagoruyko & Komodakis, 2016a) tried to transfer the attention map of the teacher network to the student network, and got meaningful results in knowledge transfer and transfer learning tasks. Huang & Wang (2017) also tried to match the feature of the student network and teacher network devising a loss term MMD (Maximum Mean Discrepancy). Yim et al. (2017) introduced another knowledge transfer technique for faster optimization and applied it also for transfer learning.
Differently from previous works, Abdulnabi et al. (2018) and Tang et al. (2016) tried to transfer information in RNNs. Knowledge sharing was even applied to NAS (Neural Architecture Search) (Zoph & Le, 2016), yielding ENAS (Efficient NAS) (Pham et al., 2018) to overcome the heavy time cost. Tarvainen & Valpola (2017) used Mean Teachers to train a student network in a semi-supervised manner, and Radosavovic et al. (2017) expanded semi-supervised learning to omni-supervised learning by proposing Data Distillation, a more applicable version of knowledge distillation. Recently, Transparent Model Distillation (Tan et al., 2018) investigated model distillation for transparency.
3 FACTOR TRANSFER
FT (Kim et al., 2018) is one of the knowledge transfer methods that use a pair of a paraphraser and a translator as a mediator between the teacher and the student networks. The paraphraser and the translator are attached to the last convolution layer of the teacher and the student networks, respectively, and the translator is trained so that its output assimilates the output of the paraphraser.
3.1 TEACHER FACTOR EXTRACTION WITH PARAPHRASER
The paraphraser is trained in an unsupervised manner to map the teacher's knowledge into another form so that the student network understands the knowledge more easily. It is designed with several convolution layers and transposed-convolution layers, in a similar way to autoencoders. The teacher's paraphrased knowledge, called the teacher factor (FT), is the output of the last convolution layer in the paraphraser. The paraphraser is trained with a simple reconstruction loss function,
$$L_{rec} = \|x - P(x)\|^2, \qquad (1)$$
where the paraphraser P(·) takes the feature map x as an input. The convolution and the transposed-convolution layers in the paraphraser are designed to make the spatial dimension of the teacher factor the same as that of the input, but the depth (number of channels) of the teacher factor can be made different from that of the input with a rate of k.
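For concreteness, the following is a minimal PyTorch sketch of a paraphraser and its reconstruction loss. The layer widths, kernel sizes, and activation choices are illustrative assumptions rather than the exact architecture used in FT.

```python
import torch
import torch.nn as nn

class Paraphraser(nn.Module):
    """Sketch: conv layers encode the teacher feature map into a 'factor';
    transposed-conv layers reconstruct the input (autoencoder-style)."""
    def __init__(self, channels, k=1.0):
        super().__init__()
        mid = int(channels * k)  # paraphrase rate k controls the factor depth
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, mid, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(mid, mid, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(mid, mid, 3, padding=1), nn.LeakyReLU(0.1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(mid, mid, 3, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(mid, mid, 3, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(mid, channels, 3, padding=1),
        )

    def forward(self, x):
        factor = self.encoder(x)      # teacher factor F_T
        recon = self.decoder(factor)  # reconstruction of x
        return factor, recon

def paraphraser_loss(paraphraser, x):
    """Unsupervised reconstruction loss of Eq. (1); x is the feature map
    from the teacher's last convolution layer."""
    _, recon = paraphraser(x)
    return ((x - recon) ** 2).mean()
```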
3.2 FACTOR TRANSFER WITH TRANSLATOR
The translator is designed with several convolution layers, whose output is called the student factor (FS). The student network and the translator are trained simultaneously with the combined loss consisting of the conventional cross-entropy loss Lcls and the factor transfer loss LFT as follows:
$$L_{student} = L_{cls} + \beta L_{FT}, \qquad (2)$$
$$L_{FT} = \left\| \frac{F_T}{\|F_T\|_2} - \frac{F_S}{\|F_S\|_2} \right\|_p^p, \qquad (3)$$
where ‖A‖p is the p-norm of the vectorized version of a tensor A. We train the translator to output FS that mimics FT . The training is done in an end-to-end manner so that the gradients from the translator also affect the student network. The translator is used only in the training phase, which means that the factor transfer method does not affect the computation at test time.
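A possible implementation of the combined objective in Eqs. (2)-(3) is sketched below; flattening the factors per sample before l2-normalization is our assumption about how the tensors are vectorized, and the default β is only illustrative.

```python
import torch.nn.functional as F

def factor_transfer_loss(student_factor, teacher_factor, p=1):
    """FT loss of Eq. (3): p-norm distance between l2-normalized factors."""
    fs = F.normalize(student_factor.flatten(1), dim=1)  # F_S / ||F_S||_2
    ft = F.normalize(teacher_factor.flatten(1), dim=1)  # F_T / ||F_T||_2
    return (fs - ft).abs().pow(p).sum(dim=1).mean()

def student_loss(logits, labels, student_factor, teacher_factor, beta=500.0):
    """Combined student objective of Eq. (2)."""
    return F.cross_entropy(logits, labels) + beta * factor_transfer_loss(
        student_factor, teacher_factor)
```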
4 PROPOSED TRAINING ALGORITHMS
In deep CNNs, due mainly to the curse of dimensionality, the data points that lie in the data space are very sparse, and this phenomenon can be easily detected by applying algorithms like k-nearest neighbors. Necessarily, the decision boundaries that divide the classes are multitudinous, because finding boundaries that fit well to a training dataset is a relatively easy task. Even if networks with the same architecture are trained, the learned decision boundaries will not be the same. This is why ensemble methods usually perform better than a single model despite their structural equality. Goodfellow et al. (2016) also state that different models will not make all the same errors on the test dataset.
Consider the conditions that determine the training procedure of CNNs. They include the structure of the CNN and the choice of an optimizer, random initialization, the sequence of mini-batch, and the types of data augmentations. If you make the same conditions for two different CNNs, their training procedure will be identical. However, we usually determine only the structure of the CNN, usually keeping the others to be random. Consequently, two networks with the same structure will definitely not learn the same decision boundaries.
Additionally, Kim et al. (2018) stated that they resolve the 'inherent differences' between two networks. Among those inherent differences, minimizing the differences in the structure of the CNNs can help the student better learn the knowledge of the teacher network. This motivation gives us the chance to propose several modified versions of existing methods. In this section, we explain the two feature-level ensemble training algorithms that we use for boosting the performance of a student network without introducing any additional computation at test time. The proposed methods are collectively called FEED, which is an abbreviation for the Feature-level Ensemble Effect for knowledge Distillation.
4.1 SEQUENTIAL FEED
Taking advantage of the fact that the structures of the teacher network and the student network are the same, we propose a learning method that can accumulate and assemble knowledge by performing knowledge transfer several times. We name this algorithm sFEED (sequential FEED), and the training procedure is illustrated in Figure 1. The paraphraser is omitted for sFEED, because the inherent difference which FT mentioned in their paper is mitigated by choosing the same type of teacher network and student network. Additionally, omitting the paraphraser simplifies the training procedure. Omitting it not only eliminates the ambiguity in the choice of hyper-parameters, but also reduces the three-stage training procedure of FT to two stages.
Since the teacher and the student networks share the same architecture, problems arising from different types of architectures being used for the teacher and the student do not occur here. A similar approach has been introduced in the paper of Furlanello et al. (2018), using output predictions. If the student network were trained standalone, it would perform similarly to the teacher network. But from the view of knowledge ensembling, since the teacher network delivers feature-level knowledge different from that of the student network, the student network will benefit from it.
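A high-level sketch of the sFEED recursion is given below. The helper names `make_network`, `train_scratch`, and `train_with_ft` are hypothetical placeholders for the usual scratch-training stage and for the FT-based transfer stage of Eq. (2); they are not part of the original implementation.

```python
def sequential_feed(make_network, train_scratch, train_with_ft, num_stacks=5):
    """Sketch of sFEED: the student trained at stack t becomes the teacher
    for stack t+1, so feature-level knowledge is accumulated recursively."""
    teacher = train_scratch(make_network())        # stack 0: plain training
    students = []
    for stack in range(1, num_stacks + 1):
        student = make_network()                   # same architecture as teacher
        student = train_with_ft(student, teacher)  # cross-entropy + FT loss
        students.append(student)
        teacher = student                          # next stack's teacher
    return students
```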
4.2 PARALLEL FEED
Assuming that giving knowledge at the feature-map level and giving knowledge in an ensemble manner both have their advantages, we want to combine these two kinds of methods. To tackle this problem, we propose a training structure named pFEED (parallel FEED) to transfer ensemble knowledge at the feature-map level, which is our second main proposed method. In this way, we take advantage of both the label-based approach of delivering the information of the ensemble model and the feature-based approach of delivering more specific information than the label-based one. The proposed algorithm is compared with KD, which distills the knowledge of the teacher networks with ensembled labels. Our algorithm is shown in Figure 2(a), and the training method of KD is shown in Figure 2(b). Consistent with sFEED, we omit the paraphraser also in pFEED. Unlike the original FT, we use multiple teachers with multiple translators. If we use k different teachers, the loss term is as follows.
$$L_{student} = L_{cls} + \beta \sum_{n=1}^{k} L_{FT_n}, \qquad (4)$$
$$L_{FT_n} = \left\| \frac{x_n}{\|x_n\|_2} - \frac{F_S}{\|F_S\|_2} \right\|_1. \qquad (5)$$
Here, $L_{FT_n}$ is the FT loss from the n-th teacher network, and $x_n$ is the output feature map obtained from the n-th teacher network.
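The multi-teacher objective of Eqs. (4)-(5) could be computed as in the following sketch. For brevity a single translator output is reused against every teacher, whereas the method described above uses one translator per teacher; this simplification is an assumption of the sketch.

```python
import torch.nn.functional as F

def pfeed_loss(logits, labels, student_factor, teacher_feature_maps, beta):
    """Sketch of Eqs. (4)-(5): cross-entropy plus the sum of l1 FT losses
    against the last feature map x_n of every teacher."""
    fs = F.normalize(student_factor.flatten(1), dim=1)   # F_S / ||F_S||_2
    loss = F.cross_entropy(logits, labels)
    for x_n in teacher_feature_maps:                     # n-th teacher's feature map
        tn = F.normalize(x_n.flatten(1), dim=1)          # x_n / ||x_n||_2
        loss = loss + beta * (fs - tn).abs().sum(dim=1).mean()
    return loss
```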
5 EXPERIMENTS
In this section, we first show the classification results on CIFAR-100 (Krizhevsky et al.). Second, we explore the feasibility of our algorithm on ImageNet (Russakovsky et al., 2015), a commonly used large dataset. For both datasets, we apply our algorithm, compare the results with KD, and analyze the results quantitatively. Third, we compare our results with state-of-the-art models on CIFAR-100. In the remaining section, we provide some analysis of our algorithm and explain the experimental details.
We chose 3 types of CNNs to check the applicability of our algorithms on CIFAR-100: ResNet (He et al., 2016), Wide ResNet (Zagoruyko & Komodakis, 2016b), and ResNext (Xie et al., 2017). For ResNets, we chose ResNet-56 and ResNet-110, which have fewer parameters compared to recent CNNs, and WRN28-10 is a model that controls the widen factor, with many more parameters. The WRN28-10 model achieves the best classification accuracy on CIFAR-100 among the WRNs reported in their paper. The ResNext29-16x64d also achieves the best classification accuracy on CIFAR-100 in their paper; this type of CNN controls the cardinality and has many more parameters compared to the other models. For ImageNet, we used ResNet-34 to confirm the feasibility on large-scale datasets.
5.1 SEQUENTIAL FEED
The classification results of our algorithm on CIFAR-100 can be found in Table 1. The word 'Stack' in Table 1 denotes the number of recursions for which the student model is trained. Table 2 shows the results using the paraphraser. Although the existence of the paraphraser seems to affect the performance, omitting it seemed better for stronger networks. Additionally, as mentioned earlier, omitting the paraphraser simplifies the training procedure.
In many cases, the classification accuracy improves as the number of stacks increases. Though there are some fluctuations, it might be possible to achieve even higher accuracy. We only experimented with up to 5 stacks since all of them achieve fairly good accuracy compared to the baseline models. Note that even though we train translators on the student side, they are not counted in the Params column because they are not used at test time.
The results of sFEED for ImageNet are in Table 3. For the base model, we simply used the pre-trained model that Pytorch supplies, and we obtained the desired result that the Top-1 and Top-5 accuracy improves at each stack.
5.2 PARALLEL FEED
In the experiments on pFEED, we used the same types of networks as in the previous experiments. For all 4 types of CNNs, we compared the classification results with those of KD because we designed our training algorithm with the intention of receiving more ensemble-like knowledge from multiple teachers. The difference between pFEED and KD is that KD ensembles the knowledge at the output level, whereas our proposed method tries to draw the effect of the ensemble at the feature-map level in order to deliver more specific information. The results are in Table 4.
The 'Scratch' column shows the performance of the base networks, which are used for the model ensemble in KD and also as teachers in pFEED. For all experiments, pFEED consistently achieves higher accuracy compared with KD. It is worth noting that the performance of KD is almost equivalent to pFEED for small networks, but for networks with a larger number of parameters, pFEED shows better accuracy than KD. This result matches the hypothesis that delivering feature-map-level information provides more specific information to the student.
The results of pFEED for ImageNet are in Table 5. We also observed some accuracy improvements on the ImageNet dataset, but we did not have enough resources to train models with more parameters and used only three teachers. We obtained decent results, but the improvements are not as strong as those of sFEED on ImageNet.
5.3 QUALITATIVE ANALYSIS
Reconstruction Loss: A paraphraser in FT can be interpreted as a convolutional autoencoder in that it uses convolution and transposed-convolution layers with a reconstruction loss; the factor can then be interpreted as a latent vector z. Supposing that the reason for the accuracy gains shown in the previous tables is that the student learns the ensemble knowledge, the student network is forced to learn information with higher complexity. Let us denote the input of the paraphraser as x. The increase in the complexity of the feature representation is equivalent to the increase in the complexity of x. In FEED training, since the number of parameters in the paraphraser is fixed, the size of z should also be fixed. Consequently, as the complexity of x increases, p(x|z) decreases, resulting in an increase of the reconstruction loss.
For both of our proposed training methods, we recorded the average training reconstruction losses of the paraphrasers, normalized by the size of the paraphraser, and plotted the curves in Figure 3 and Figure 4 (though we do not actually use paraphrasers for FEED training). In Fig. 3, Ph1 through Ph4 are the paraphrasers trained based on the student networks of Stack1 through Stack4 in Table 1, and the paraphrasers in Figure 4 are trained based on one of the scratch teacher networks and the following student network of pFEED in Table 4. As expected, as the knowledge is transferred, the reconstruction loss becomes larger, which indicates that the student network learns more difficult knowledge and thus the classification accuracy increases. This trend surprisingly matches the results in the tables. In Figure 3, the legends match the trends of the ResNet-56 row in Table 1 for sFEED (in particular, Stack 2 and 3 have similar errors and likewise, Ph2 and Ph3 are similar), and the legend in Figure 4 also follows the large performance increase in the WRN28-10 row of Table 4.
5.4 IMPLEMENTATION DETAILS
CIFAR-100: In the student network training phase, we used the l1 (p = 1) loss, and the hyper-parameter β in Eq. 2 was set to 500 for ResNets and 2,000 for WideResNet and ResNext. We tried to keep the training procedure the same as in the original papers. For ResNets, we set the initial learning rate to 0.1, decayed it by a factor of 0.1 at 80 and 120 epochs, and finished training at 160 epochs. For WideResNets, we set the initial learning rate to 0.1, decayed it by a factor of 0.2 at 60, 120, and 160 epochs, and ended training at 200 epochs. For ResNexts, we set the initial learning rate to 0.1, decayed it by a factor of 0.1 at 150 and 225 epochs, and finished training at 300 epochs. For all experiments, plain SGD is used as the optimizer, with a momentum of 0.9, a weight decay of 5 × 10−4, and a mini-batch size of 128. The ResNets and WideResNets were trained on a single Titan XP and the ResNexts were trained on four 1080 Ti GPUs. The same setting was applied to the translators with 3 convolution layers, since the translators were trained jointly with the student network.
Paraphraser: All the paraphrasers used in Table 2 are trained for 10 epochs with a learning rate of 0.1. The original paper states that the paraphrasers are trained for 30 epochs, but training them for more than 10 epochs was unnecessary. All paraphrasers have 3 convolution layers, and since the teacher network and the student network are of the same kind, we set the paraphrase rate k to 1 for simple implementation. According to the FT paper, this choice did not seem to affect the performance much.
ImageNet: The hyper-parameter β was set to 1,000, and following the training schedule of the Pytorch framework, training starts with a learning rate of 0.1, decays by a factor of 0.1 at 30 and 60 epochs, and finishes at 90 epochs, with a mini-batch size of 256. All other conditions are the same as in the CIFAR-100 setting.
Setting of β: For ease of reproduction, the choice of β is important. Supposing that we use LFT with l1 (p = 1), the loss scale of LFT depends on the number of nodes. Empirically, if the scale of either Lcls or LFT is dominant, the accuracy diminishes compared to the balanced case, so we approximately adjusted the scale of LFT, resulting in different βs for different networks. Hyper-parameters of KD: We set the temperature T for the softened softmax to 4 as in KD (Hinton et al., 2015).
5.5 COMPARISON WITH KNOWLEDGE TRANSFER METHODS
The comparison with recent state-of-the-art knowledge distillation methods is shown in Table 6. The scores for the ONE method are from the paper of Lan et al. (2018), and the scores for the BAN method are from the paper of Furlanello et al. (2018). Both methods use final predictions to transfer knowledge, while our method uses output feature maps. Unfortunately, we could not prepare all variations of networks and hyper-parameters for a fair comparison, but we could still achieve decent performance.
6 CONCLUSION
In this work, we proposed a couple of new network training algorithms referred to as Feature-level Ensemble Effect for knowledge Distillation (FEED). With FEED, we can improve the performance of a network by injecting ensemble knowledge into the student network. The first one, sequential FEED, recursively trains the student network and incrementally improves performance. The second one, parallel FEED, trains the student network using multiple teachers simultaneously. The qualitative analysis with the reconstruction loss gives hints about the cause of the accuracy gains. The main drawback is the training time needed for multiple teachers, which is an inherent characteristic of any ensemble method, and pFEED causes a bottleneck by feeding inputs to multiple teachers. Consequently, applying it more efficiently will be our future work, together with application to other domains.
2. What are the strengths and weaknesses of the proposed methods, Sequential and Parallel-FEED?
3. How does the reviewer assess the novelty of the work, and what are some potential concerns regarding its originality?
4. What are some issues with the experimental results presented in the paper, and how could they be improved?
5. Are there any suggestions for additional experiments or comparisons that could enhance the paper's validity and impact?
6. How does the reviewer evaluate the writing quality and clarity of the paper, and are there any specific areas that could be improved?
7. Based on the review, what is the overall recommendation for the paper's publication, and what changes would be required to make it more suitable for acceptance? | Review | Review
In this paper, the authors present two methods, Sequential and Parallel-FEED for learning student networks that share architectures with their teacher.
Firstly, it would be a good idea to cite https://arxiv.org/abs/1312.6184; it precedes knowledge distillation and is basically the same thing minus a temperature parameter and a catchy name.
The paper could do with some further grammar/spell checks.
It isn't clear to me where the novelty lies in this work. Sequential-FEED appears to be identical to BANs (https://arxiv.org/abs/1805.04770) with an additional non-linear transformation on the network outputs as in https://arxiv.org/abs/1802.04977. Parallel-FEED is just an ensemble of teachers; please correct me if I'm wrong.
The experimental results aren't convincing. There aren't any fair comparisons. For instance, in table 6 a WRN-28-10(sFEED) after 5 whole training iterations is compared to a WRN-28-1(BAN) after 1. It would be good to run BAN for as many iterations. A comparison to attention transfer (https://arxiv.org/abs/1612.03928) would be ideal for the ImageNet experiments. Furthermore, if one isn't interested in compression, then Table 4 indicates that an ensemble is largely preferable.
This work would benefit from a CIFAR-10 experiment as it's so widely used (interestingly, BANs perform poorly on CIFAR-10), also a task that isn't image classification would be helpful to get a feel of how the method generalizes.
In summary I believe this paper should be rejected, as the method isn't very novel, and the experimental merits are unclear.
Pros:
- Simple method
- Largely written with clarity
Cons:
- Method is not very novel
- Not compared thoroughly enough to other work
ICLR | Title
Class Prototype-based Cleaner for Label Noise Learning
Abstract
Semi-supervised learning based methods are current SOTA solutions to the noisy-label learning problem, which rely on learning an unsupervised label cleaner first to divide the training samples into a labeled set for clean data and an unlabeled set for noise data. Typically, the cleaner is obtained via fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is class-agnostic and assumes the loss distributions of clean and noise samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noise data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noise labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks.
1 INTRODUCTION
Deep Neural Networks (DNNs) have brought about significant progress to the computer vision community over past few years. One key to its success is the availability of large amount of training data with proper annotations. However, label noise is very common in real-world applications. Without proper intervention, DNNs would be easily misled by the label noise and yield poor performance.
In order to improve the performance of DNNs when learning with noise labels, various methods have been developed (Liu et al., 2020; Li et al., 2020a; Reed et al., 2014; Nishi et al., 2021). Among them, semi-supervised learning based methods (Nishi et al., 2021; Li et al., 2020a) achieve the most competitive results. The semi-supervised learning methods follow a two-stage pipeline. They first model the loss distribution of training samples to construct a noise cleaner based on the "small-loss prior" (Han et al., 2020), which says that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The prior is widely adopted and demonstrated to be highly effective in practice (Han et al., 2020). Given the noise cleaner, the training samples are divided into a labeled clean set and an unlabeled noise set. Then, semi-supervised learning strategies like MixMatch (Berthelot et al., 2019) are employed to train DNNs on the divided dataset.
The key to their performance lies in the accuracy of the label-noise cleaner (Cordeiro et al., 2022). Usually, a single Gaussian Mixture Model (GMM) (Li et al., 2020a) is used to model the loss distribution of all the training samples across different categories. However, this modeling procedure is class-agnostic, which assumes a DNN model has the same learning speed to fit the training samples in different categories, thus the same loss value on samples in different categories can reflect the same degree of noise likelihood.
Unfortunately, such an assumption does not hold in practice. In Fig. 1, we present the cross-entropy loss distribution of training samples at the end of the DNNs' warm-up period. We conduct a Kolmogorov-Smirnov test (Massey Jr, 1951) to quantify the loss distribution difference between the samples in each class and the samples in the whole dataset. The results show that for 54% of the categories in CIFAR-100 under 90% symmetric noise, the p-value is lower than 0.05¹ for the hypothesis test that the probability distribution of clean samples in the class is the same as the probability distribution of clean samples in the whole dataset, while the number in the case of noise samples is 53%. Therefore, the class-agnostic label noise cleaner, which establishes an overly rigid criterion shared by all the classes, would introduce more noise samples into the clean set while rejecting clean samples, and consequently make the model perform poorly. A straightforward remedy to the problem is to fit distinct GMMs to the losses of samples in different classes respectively, yielding a class-aware GMM cleaner. Nevertheless, this class-aware modeling strategy implicitly assumes that label noise exists in every class. In the case of asymmetric noise, e.g., CIFAR10-asym40%, where samples in part of the classes are clean, such a naive strategy would classify most of the hard samples in the clean classes as noise, resulting in a negative effect on model training.
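As an illustration, the per-class Kolmogorov-Smirnov analysis could be carried out roughly as follows; the array names and the availability of ground-truth clean/noise flags (known for synthetic noise benchmarks) are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import ks_2samp

def per_class_ks_pvalues(losses, labels, is_clean, num_classes):
    """For each class, test whether the loss distribution of its clean samples
    differs from that of clean samples in the whole dataset.
    losses, labels, is_clean: 1-D numpy arrays of per-sample values."""
    global_clean = losses[is_clean]
    pvalues = []
    for k in range(num_classes):
        class_clean = losses[is_clean & (labels == k)]
        _, p = ks_2samp(class_clean, global_clean)
        pvalues.append(p)
    return np.array(pvalues)  # fraction with p < 0.05 quantifies the mismatch
```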
Considering that images in the same category should share similar visual representations, the similarity between a sample and the cluster center (e.g., class prototype) of its labeled class is helpful for recognizing label noise. In this paper, we propose a simple Class Prototype-based label noise Cleaner (CPC) to apply class-aware modulation to the partitioning of clean and noise data, which takes advantage of intra-class consistency regularization in feature space and loss distribution modeling simultaneously. CPC learns an embedding for each class, i.e., class prototypes, via intra-class consistency regularization, which urges samples in the same class to gather around the corresponding class prototype while pushing samples not belonging to the class away. Unlike the aforementioned naive class-aware GMM cleaner, CPC applies class-aware modulation to label noise partitioning via representation similarity measuring without assuming that label noise exists in every class, which is more general for different label noise scenarios. Meanwhile, CPC leverages the "small-loss prior" to provide stronger and more robust supervision signals to facilitate the learning of prototypes.
We plug CPC into the popular DivideMix (Li et al., 2020a) framework, which iterates between label noise partitioning and DNN optimization. With the stronger label noise cleaner in the first stage, DNNs can be trained better in the second stage, which further improves the learning of class prototypes. We theoretically justify the procedure from the Expectation-Maximization algorithm perspective, which guarantees the efficacy of the method. We conduct extensive experiments on multiple noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results clearly show that CPC effectively improves the accuracy of label-noise partitioning, and brings about consistent performance improvement across all noise levels and benchmarks.
The contributions of our work lie in three folds: (1) We reveal the long-ignored problem of class-agnostic loss distribution modeling that widely exists in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC); (2) CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space si-
1A p-value < 0.05 suggests the probability that the class-wise loss distribution are the same with the global loss distribution is lower than 5%.
multaneously, which can better distinguish clean and noise labels; (3) Extensive experimental results show that our method achieves competitive performance compared to current SOTAs.
2 RELATED WORK
Recent advances in robust learning with noisy labels can be roughly divided into three groups. (a) Label correction methods aim to translate wrong labels into correct ones. Early studies rely on an auxiliary set with clean samples for clean label inference (Xiao et al., 2015a; Vahdat, 2017; Li et al., 2017b; Lee et al., 2018). Recent efforts focus on performing label correction procedures without supervision regarding clean or noise labels. (Yi & Wu, 2019a; Tanaka et al., 2018) propose to jointly optimize labels while learning model parameters. Li et al. (2020b) propose to correct corrupted labels via learning class prototypes and utilize the pseudo-labels generated by measuring the similarity between prototypes and samples to train the model. Wu et al. (2021) and Li et al. (2021) introduce neighbouring information in feature space to correct noise labels, and propose a graph-based method and a class prototype-based method, respectively. (b) Sample selection methods select potentially clean samples for training to eliminate the effect of noise labels on learning the true data distribution. (Han et al., 2018; Jiang et al., 2018; 2020; Yu et al., 2019) involve training two DNNs simultaneously and focus on the samples that are likely to be correctly labeled. (c) Semi-supervised learning methods conceal noise labels and treat these samples as unlabeled data (Ding et al., 2018). DivideMix (Li et al., 2020a) is a typical algorithm among these works, which comprises an unsupervised label noise cleaner that divides the training data into a labeled clean set and an unlabeled noise set, followed by semi-supervised learning that minimizes the empirical vicinal risk of the model. Inspired by DivideMix, a series of methods (Cordeiro et al., 2022; Nishi et al., 2021; Cordeiro et al., 2021) have been proposed, which achieve SOTA performance. However, all these methods rely on class-agnostic loss distribution modeling to obtain the label noise cleaner, which hinders the performance of the model. The class-agnostic loss distribution modeling implicitly assumes a DNN model has the same learning speed to memorize training samples in different categories. However, in reality, the memorization speeds are actually different and will cause the problem of under-learning in hard classes, as revealed by Wang et al. (2019). In this paper, we focus on another problem, i.e., the class-agnostic loss distribution modeling problem caused by this issue in the context of the label noise cleaner. In our method, we propose the simple yet effective class prototype-based label noise cleaner to solve the problem. Besides, compared to previous prototype-based label noise learning methods (Li et al., 2020b; 2021), our method differs from them in two folds: (1) we utilize prototypes as a label noise cleaner to effectively improve the semi-supervised learning based methods; (2) CPC takes advantage of both loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which learns better prototypes.
3 PRELIMINARY
In label noise learning, given a training set D = (X,Y ) = {(xi, yi)}Ni=1, where xi is an image and yi ∈ {1, 2, ...,K} is the annotated label over K classes, the label yi could differ from the unknown true label ŷi. In this paper, we follow the popular label noise learning framework DivideMix (Li
et al., 2020a), which first warms up the model for a few epochs by training on all the data using the standard cross-entropy loss, and then trains the model by iterating a two-stage pipeline. The pipeline comprises an unsupervised label cleaner Q to divide training samples into a labeled set for clean data X and an unlabeled set for noise data U , followed by a semi-supervised learning stage that trains the model to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}(p(\tilde{y}'_i|x'_i), y'_i) + \frac{\lambda}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}(p(\tilde{y}'_i|x'_i), y'_i), \qquad (1)$$
where $\mathcal{X}'$ and $\mathcal{U}'$ indicate the MixMatch (Berthelot et al., 2019) augmented clean and noise sets. $\ell_{\mathcal{X}'}$ and $\ell_{\mathcal{U}'}$ denote the losses for samples in sets $\mathcal{X}'$ and $\mathcal{U}'$, which are weighted by $\lambda$. $p(\tilde{y}'_i|x'_i)$ is the softmax output of the DNN, where $\tilde{y}'_i$ is the predicted label. For more details about EVR, please refer to appendix A.1.
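A rough sketch of this objective is shown below, assuming, as in DivideMix, a cross-entropy term for the labeled set and a mean-squared term for the unlabeled set with soft (mixed) targets; the function and variable names are ours.

```python
import torch.nn.functional as F

def evr_loss(logits_x, targets_x, logits_u, targets_u, lam):
    """Sketch of Eq. (1): weighted sum of the clean-set and noise-set losses.
    targets_x / targets_u are soft label vectors from MixMatch augmentation."""
    log_probs_x = F.log_softmax(logits_x, dim=1)
    loss_x = -(targets_x * log_probs_x).sum(dim=1).mean()   # labeled term
    probs_u = F.softmax(logits_u, dim=1)
    loss_u = ((probs_u - targets_u) ** 2).mean()             # unlabeled term
    return loss_x + lam * loss_u
```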
In Li et al. (2020a), the unsupervised label cleaner is operated under the “small-loss prior”, which is widely adopted and demonstrated to be highly effective (Han et al., 2020). The prior assumes that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The well known insight behind the “small-loss prior” is that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017). Given a training sample xi and the softmax output p(ỹi|xi) of DNNs, where ỹi is the predicted label, the cross-entropy loss l(p(ỹi|xi), yi) reflects how well the model fits the training sample.
To achieve the unsupervised label cleaner Q, a two-component Gaussian Mixture Model (GMM) is employed to fit the loss distribution of all training samples, i.e., ℓ(p(ỹi|xi), yi) ∼ ϕ0N(µ0, σ0) + ϕ1N(µ1, σ1), where µ0 < µ1, and ϕ is a mixing coefficient. The component with the smaller mean represents the distribution of clean samples and the other one is for noise samples. We use zi ∈ {0, 1} to indicate whether the data is clean or not. Then, q(zi = 0) represents the clean probability of xi, which is the posterior probability of its loss belonging to the clean component. The label cleaner is shared by training samples across different classes, which is actually class-agnostic. A hypothesis implicitly accompanying this loss distribution modeling method is ignored by current works, which assumes the loss distributions of clean and noise samples are consistent across different categories. Unfortunately, as illustrated in Fig. 1, the hypothesis does not hold in practice. In this paper, we propose the class prototype-based label noise cleaner, which applies class-aware modulation to the partitioning of clean and noise data and improves label noise learning.
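The class-agnostic GMM cleaner can be sketched as follows; the specific GaussianMixture hyper-parameters are illustrative rather than the exact values used in DivideMix.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_clean_probability(losses):
    """Fit a two-component GMM to per-sample cross-entropy losses and return
    q(z_i = 0): the posterior of the small-mean (clean) component."""
    losses = np.asarray(losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    posteriors = gmm.predict_proba(losses)        # shape (N, 2)
    clean_component = int(np.argmin(gmm.means_))  # component with smaller mean
    return posteriors[:, clean_component]
```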
4 METHODOLOGY
4.1 OVERVIEW
Our method follows the two-stage label noise learning framework DivideMix (Li et al., 2020a) and improves the framework with the proposed CPC. CPC comprises class prototypes C = {ck ∈ R1×d | k = 1, 2, ..., K}, where ck indicates the prototype of the k-th class and d is the dimension of the prototype embedding. Our DNN model consists of a CNN backbone, a classifier head and a projection layer. The backbone maps an image input xi to a feature vector vi ∈ R1×D. The classifier takes vi as input and outputs the class prediction p(ỹi|xi). The projection layer serves to project the high-dimensional feature vi to a low-dimensional embedding v′i ∈ R1×d, where d < D. As shown in Fig. 2, we update the DNN as well as the CPC by iterating a two-stage training pipeline in every epoch. In the first stage, we update CPC as well as the projector in the DNN, and utilize the updated CPC to partition label noise. We first calculate the cross-entropy loss of every training sample and fit a GMM to the losses. We utilize the GMM as a label noise cleaner to get a labeled clean set XGMM and an unlabeled noise set UGMM. The data partitions XGMM and UGMM are utilized to update the prototypes in CPC and the parameters of the projector. Note that we cut off the gradient back-propagation from the projector to the CNN backbone. Then, the updated CPC is employed to re-divide the training data into another two sets X and U. In the second stage, we train the DNN model to minimise the EVR in Eq. (1) with data partitioned by the cleaner. In the first e epochs, we wait for CPC to warm up, and minimise the EVR of the DNN based on training data partitioned by the GMM cleaner. After the e-th epoch, the label noise estimation results of CPC, i.e., X and U, are employed to train the DNN, while the estimation results of the GMM cleaner are only used to update
prototypes in CPC. At inference time, we directly utilize the DNN classifier for image recognition. In A.5, we further delineate the full framework.
4.2 CLASS PROTOTYPE-BASED LABEL NOISE CLEANER
In order to apply class-aware modulation to the label noise partitioning, we propose to learn an embedding space where samples from the same class are aligned with their class prototypes, and leverage the prototypes to recognize noise labels. The prototypes are typically learnt with intra-class consistency regularization, which urges samples in the same class to align with the corresponding class prototype while keeping samples not belonging to the class away. Previous methods (Wang et al., 2022; Li et al., 2020b) apply the intra-class consistency regularization to prototype learning via unsupervised contrastive objectives, e.g., the prototypical contrastive objective (Li et al., 2020c), where the unsupervised training labels are typically determined by the similarity between samples and prototypes. The accuracy of the training labels highly depends on the quality of the representation learnt by the CNN encoder, which can be too low to effectively update the prototypes, especially in the early stage of training. In contrast, we empirically find that the GMM cleaner, which operates under the well-evaluated "small-loss prior", is not as sensitive as the prototypes to the representation quality, and can provide more robust and accurate training labels.
Therefore, we propose to take samples in the clean set XGMM as positive samples and those in the noise set UGMM as negative samples to update the prototypes. Specifically, given the feature embedding v′i of a sample xi from XGMM, we update the prototypes C as well as the parameters of the projector to maximize the score q(zi = 0) between ck=yi and v′i, and minimize the score between ck≠yi and v′i, by minimizing LXGMM:
$$\mathcal{L}_{\mathcal{X}_{GMM}} = -\frac{1}{|\mathcal{X}_{GMM}|}\sum_{\mathcal{X}_{GMM}}\sum_{k=1}^{K} \ell_k(v'_i, y_i), \quad \text{where}$$
$$\ell_k(v'_i, y_i) = \begin{cases} \log(\mathrm{sigmoid}(v'_i c_k^\top)), & k = y_i, \\ \lambda_{neg}\log(1-\mathrm{sigmoid}(v'_i c_k^\top)), & k \neq y_i, \end{cases} \qquad (2)$$
where $\lambda_{neg} = \frac{1}{K}$ weights the losses between the positive pair and the negative pairs to avoid under-fitting the positive samples. Given $v'_i$ of a sample $x_i$ from $\mathcal{U}_{GMM}$, we update the prototypes $c_k$ as well as the parameters of the projector to minimize the score $q(z_i = 0)$ between $c_{k=y_i}$ and $v'_i$ by minimizing $\mathcal{L}_{\mathcal{U}_{GMM}}$:
$$\mathcal{L}_{\mathcal{U}_{GMM}} = -\frac{1}{|\mathcal{U}_{GMM}|}\sum_{\mathcal{U}_{GMM}} \log(1-\mathrm{sigmoid}(v'_i c_k^\top)), \quad \text{where } k = y_i. \qquad (3)$$
At last, for noise samples in $\mathcal{U}_{GMM}$ with high classification confidence, the samples are more likely to belong to the class predicted by the DNN, which is potentially valuable for the update of the prototypes. Therefore, we collect such training samples $\mathcal{X}_P$ from $\mathcal{U}_{GMM}$, taking the averaged classification confidence of samples in $\mathcal{X}_{GMM}$ as the threshold. Specifically, given a sample in $\mathcal{U}_{GMM}$ with the label predicted by the DNN $k = \max_k(p(\tilde{y}_i|x_i))$, the sample is collected into $\mathcal{X}_P$ if $p(\tilde{y}_i|x_i)_k > \mathrm{average}(\{p(\tilde{y}_j|x_j)_k \mid (x_j, y_j) \in \mathcal{X}_{GMM},\, y_j = k\})$. Then, we update the prototypes and the projector to minimize $\mathcal{L}_{\mathcal{X}_P}$:
$$\mathcal{L}_{\mathcal{X}_P} = -\frac{1}{|\mathcal{X}_P|}\sum_{\mathcal{X}_P} \log(\mathrm{sigmoid}(v'_i c_k^\top)), \quad \text{where } k = \max_k(p(\tilde{y}_i|x_i)). \qquad (4)$$
The overall empirical risk $\mathcal{L}_C$ for the prototypes and the projector is as follows:
$$\mathcal{L}_C = \mathcal{L}_{\mathcal{X}_{GMM}} + \mathcal{L}_{\mathcal{U}_{GMM}} + \alpha\mathcal{L}_{\mathcal{X}_P}, \qquad (5)$$
where $\alpha$ is the weight scalar.
CPC distinguishes a clean sample $(x_i, y_i)$ with the score $q(z_i = 0) = \mathrm{sigmoid}(v'_i c_{k=y_i}^\top)$ and the threshold $\tau$. Samples with $q(z_i = 0) > \tau$ are classified as clean, and otherwise as noise.
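A minimal sketch of the CPC objective (Eqs. (2)-(5)) and the partition rule is given below. The tensor shapes, masks, numerical epsilon, and the assumption that the high-confidence set X_P is passed in as a mask are ours; the actual implementation handles batching and updates differently.

```python
import torch
import torch.nn.functional as F

def cpc_losses(v, labels, prototypes, clean_mask, alpha=1.0,
               xp_mask=None, pred_labels=None):
    """v: projected embeddings v'_i (N x d); prototypes: K x d;
    clean_mask: boolean mask from the GMM cleaner (True = clean);
    xp_mask: high-confidence subset of the noise samples (X_P);
    pred_labels: DNN-predicted labels used for X_P."""
    eps = 1e-8
    scores = torch.sigmoid(v @ prototypes.t())          # q(z_i = 0) per class
    K = prototypes.size(0)
    onehot = F.one_hot(labels, K).float()
    lambda_neg = 1.0 / K

    clean = scores[clean_mask]                           # Eq. (2)
    pos = (torch.log(clean + eps) * onehot[clean_mask]).sum(1)
    neg = (torch.log(1 - clean + eps) * (1 - onehot[clean_mask])).sum(1)
    loss_clean = -(pos + lambda_neg * neg).mean()

    noise = scores[~clean_mask]                          # Eq. (3)
    noise_y = labels[~clean_mask]
    loss_noise = -torch.log(
        1 - noise[torch.arange(noise.size(0)), noise_y] + eps).mean()

    loss_xp = v.new_zeros(())                            # Eq. (4)
    if xp_mask is not None and xp_mask.any():
        xp = scores[xp_mask]
        xp_y = pred_labels[xp_mask]
        loss_xp = -torch.log(
            xp[torch.arange(xp.size(0)), xp_y] + eps).mean()

    return loss_clean + loss_noise + alpha * loss_xp     # Eq. (5)

def cpc_partition(v, labels, prototypes, tau=0.5):
    """A sample is kept as clean when sigmoid(v'_i c_{y_i}^T) > tau."""
    scores = torch.sigmoid((v * prototypes[labels]).sum(dim=1))
    return scores > tau
```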
4.3 THEORETICAL JUSTIFICATION ON THE EFFICACY OF CPC
We provide theoretical justification on the efficacy of CPC from the perspective of ExpectationMaximization algorithm, which guarantees that though CPC does not follow the classical prototypical contrastive objective, it can still learn meaningful prototypes and act as an effective cleaner.
We consider training data with label noise $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^{N}$ as the observable data, and $Z \in \{0,1\}^N$ as the latent variable, where $z_i = 0$ iff $(x_i, y_i)$ is clean (i.e., $y_i = \hat{y}_i$). The prototypes $C$ in the cleaner are taken as the parameters expected to be updated. Then, the negative log likelihood for $D$ given $C$ is as follows:
$$NLL(D|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} p(x_i, y_i, z_i|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} q(z_i)\frac{p(x_i, y_i, z_i|C)}{q(z_i)}, \qquad (6)$$
where $q(z_i) = p(z_i|x_i, y_i, C)$. According to the Bayes theorem and Jensen's inequality, we have
$$NLL(D|C) = -\sum_{D}\log\sum_{z_i\in\{0,1\}} q(z_i)\,p(x_i, y_i|C) \le -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(x_i, y_i|C) = -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i) + \mathrm{const}, \qquad (7)$$
where $-\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i)$ is the upper bound of $NLL(D|C)$. Typically, we can adopt the EM algorithm to find the prototypes $C$ that minimize the upper bound by iterating:
E-step: Compute a new estimate of $q(z_i)$ (i.e., clean or noise) according to the prototypes $C^{old}$ from the last iteration:
$$q(z_i) = p(z_i|x_i, y_i, C^{old}). \qquad (8)$$
M-step: Find the prototypes $C$ that minimize the bound:
$$C^{new} = \arg\min_{C} -\sum_{D}\sum_{z_i\in\{0,1\}} q(z_i)\log p(y_i|C, x_i) \qquad (9)$$
In our method, in order to introduce the "small-loss prior" to provide stronger and more robust supervision signals to the learning of CPC, in the E-step we estimate the distribution of clean or noise samples, denoted as $q(z'_i)$, via the GMM cleaner instead of $q(z_i)$ in Eq. (8). Consequently, we replace $q(z_i)$ in Eq. (9) with $q(z'_i)$ and find the prototypes $C$ that minimize the bound. Next, we justify that the EM algorithm still works by showing that $q(z'_i)$ can be considered an approximation to $q(z_i)$ in our framework.

In our method, $q(z'_i) = p(z'_i|\ell(p(\tilde{y}_i|x_i), y_i))$, where $\tilde{y}_i \sim p(\tilde{y}_i|x_i, \theta)$ is the label predicted by the DNN parameterized by $\theta$. As introduced in section 4.1, in the first stage of each epoch, the CPC's estimation results $z_i \sim q(z_i)$ are utilized to divide training samples into a labeled set for clean data $\mathcal{X} = \{(x_i, y_i)|z_i = 0\}$ and an unlabeled set for noise data $\mathcal{U} = \{(x_i, y_i)|z_i = 1\}$. Then the parameters of the DNN, which we denote as $\theta$, are optimized using Eq. (1) in the second stage. There exists an optimal $\theta^*$ with respect to $z_i$, with which the softmax output $p(\tilde{y}_i|x_i)$ of the DNN satisfies:
$$\ell(p(\tilde{y}_i|x_i), y_i) = 0 \ \text{if}\ z_i = 0, \ \text{otherwise}\ 1, \qquad (10)$$
where $\ell(p(\tilde{y}_i|x_i), y_i)$ is the cross-entropy loss between the network prediction and the annotated label. With these loss values, the subsequent GMM cleaner can easily distinguish samples of $\mathcal{X}$ from samples of $\mathcal{U}$. In other words, under the optimal $\theta^*$, the estimation of the GMM cleaner would be consistent with the partition of CPC, i.e., $z'_i = z_i$. In practice, in each epoch, we take the $\theta$ optimized to minimize Eq. (1) as an approximation to the optimal $\theta^*$ with respect to $z_i$, and consequently we can take $q(z'_i)$ as an approximation to $q(z_i)$. Therefore, we can see that with the "small-loss prior" introduced into the prototype learning, the EM optimization procedure still works, which guarantees that CPC can learn meaningful prototypes and act as an effective cleaner. In appendix A.4, we further present more details and empirical results to demonstrate that the approximation holds in practice.
5 EXPERIMENTS
5.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets. We evaluate our method on the following popular LNL benchmarks. For CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), we experiment with two types of synthetic noise: symmetric
and asymmetric, which are injected into the datasets following the standard setup in (Li et al., 2020a). Clothing1M (Xiao et al., 2015b) and WebVision1.0 (Li et al., 2017a) are two large-scale real-world label noise benchmarks. Clothing1M contains 1 million images in 14 categories acquired from online shopping websites, which is heavily imbalanced and most of the noise is asymmetric (Yi & Wu, 2019b). WebVision1.0 contains 2.4 million images crawled from the web using the concepts in ImageNet-ILSVRC12 (ILSVRC12). Following convention, we compare with SOTAs on the first 50 classes of WebVision, as well as the performance after transferring to ILSVRC12.
Implementation details. We plug the proposed CPC into the DivideMix (Li et al., 2020a) framework. For Clothing1M and CIFAR-10 with asymmetric noise, we employ a single class-agnostic GMM for loss-distribution modeling. For other cases, we find that class-aware GMMs further improve the performance of CPC. Following DivideMix, we employ ResNet18 (He et al., 2016) for CIFAR-10 and CIFAR-100, and utilize ImageNet pre-trained ResNet-50 for Clothing1M. Since previous works chose different backbones, e.g., Inception-resnet v2 (Szegedy et al., 2017) and ResNet-50, we adopt the weaker one, i.e., ResNet-50 according to (Zheltonozhskii et al., 2021), and train it from scratch for fair comparison. The threshold τ of CPC is set to 0.5 by default for all the datasets except for the extremely imbalanced Clothing1M, where it is set to 0.3. For CIFAR-10 and CIFAR-100, we train the models for 450 epochs. For the large-scale datasets Clothing1M and WebVision1.0, we train the models for 80 and 100 epochs, respectively. The warm-up period of the prototypes for all the datasets is set to the first 5% of epochs after network warm-up, except in CIFAR-100 with noise ratios larger than 80%, where it is set to 10% of the total epochs. For the other settings, we simply follow the standard set-up as in DivideMix. For more implementation details, please refer to the appendix A.2 and the code in the supplementary materials.
5.2 COMPARISON WITH STATE-OF-THE-ART METHODS
Real-world noise benchmarks. We evaluate our method on real-world large scale data sets, and compare our method with latest SOTA label noise learning methods, including DivideMix(Li et al., 2020a), LongReMix(Cordeiro et al., 2022), NGC(Wu et al., 2021), GJS(Englesson & Azizpour, 2021), ELR+(Liu et al., 2020), AugDMix(Nishi et al., 2021) and NCR(Huang et al., 2021). For WebVision, we measure the top1 and top5 accuracy on WebVision validation set and ImageNet ILSVRC12 validation set. We take ResNet50-based DivideMix (Zheltonozhskii et al., 2021) as baseline. As shown in Table 1, our CPC improves top1 and top5 accuracy over baseline model on WebVision by 3.33% and 2.81%, respectively. Our method achieves competitive performance on WebVision, and shows stronger transferable capability, outperforming other competitors on the ILSVRC12 validation set significantly. For Clothing1M, we apply the strong augmentation strategy (Nishi et al., 2021) to DivideMix as our baseline, and rerun the method three times. Our method achieves 75.4% accuracy on this challenging benchmark, outperforming all the other SOTAs. We also notice that though NCR achieves SOTA result on WebVision, it shows moderate performance compared to ELR+, DivideMix and AugDMix on Clothing1M containing asymmetric noise with imbalanced data distribution. It reveals that our method could be more robust across different label noise scenarios.
Synthetic noise benchmarks. We evaluate the performance of CPC on CIFAR-10 and CIFAR-100 datasets with symmetric label noise level ranging from 20% to 90% and asymmetric noise of rate 40%. We take AugDMix as the baseline, and compare our method with latest SOTA methods, where DivideMix, LongReMix and Aug-DMix are semi-supervised learning based methods. Following NGC and GJS, we run our method three times with different random seeds and report the mean and standard deviation. For other methods, e.g., ProtoMix (Li et al., 2021), we report the best results reported in their papers. As shown in Table 2, though with a baseline method as strong as AugDMix, our method brings about performance improvement across all noise levels as well as noise types consistently, and establishes new SOTAs on CIFAR-10 and CIFAR-100. Additionally, we notice that, under asymmetric noise set-up, semi-supervised learning based methods consistently outperform other methods that achieve SOTA results on WebVision benchmark,including NGC, GJS and NCR. The results reveal that semi-supervised learning based method could be more robust to asymmetric noise, while our method achieves SOTA performance among them.
5.3 ANALYSIS
Is CPC a better label noise cleaner? We evaluate the performance of the label noise cleaners under both symmetric and asymmetric label noise set-ups. For symmetric noise, we use CIFAR-100 with 90% noise as the benchmark to reveal the relationship between CPC and the significant performance improvement under this set-up. For asymmetric noise, we employ the most commonly adopted CIFAR-10-asym40% as the benchmark. The AUC of the clean/noise binary classification results of a cleaner is calculated as the evaluation metric. We take the original class-agnostic GMM cleaner (GMMagn) proposed in DivideMix as the baseline, and compare it to our CPC and the aforementioned naive class-aware GMM cleaner (GMMawr). Furthermore, we also implement another version of CPC that is trained based on the class-aware GMM cleaner. To distinguish these two CPC variants, we denote the regular one trained based on the conventional class-agnostic GMM cleaner as CPCagn, and the other one as CPCawr. As shown in Figure 3, in both cases, the regular CPCagn outperforms the baseline GMMagn as well as GMMawr, which demonstrates that our class prototype-based method is the better label noise cleaner. As for the comparison between GMMagn and GMMawr, we find that in the situation of high symmetric noise, though GMMagn shows better performance in the early stage of training, GMMawr outperforms it in the second half of the training. In the case of asymmetric noise, GMMawr, which tends to wrongly classify hard clean samples in clean categories as noise, consistently underperforms GMMagn across the whole training period. The results further prove that our class prototype-based method is the better choice for applying class-aware modulation to label noise cleaning, and it is more robust across different noise types. Moreover, we find that in the case of asymmetric noise, CPCagn achieves higher AUC compared to GMMagn, which shows our method can partially make up for the shortcomings of GMMagn. In the case of symmetric noise, we find that GMMagn can further improve the performance of CPC, where CPCawr achieves the best performance among the four cleaners.
How do different label noise cleaners affect label noise learning? We plug different cleaners into the DivideMix framework, and keep all the other training settings the same as described in the implementation details. As shown in Table 3, the final performance of the model is consistent with the performance of the cleaner used. On CIFAR-100 with 90% symmetric noise, the performance improvement brought about by CPCagn is 7.68%, while the model with CPCawr outperforms the baseline method by 13.4%. We also report the comparison results on the large-scale WebVision dataset, where the performance of the different models shows the same trend of change as in CIFAR-100-sym90%. As for the asymmetric noise situation, i.e., CIFAR-10-asym40% and Clothing1M, the model with CPCagn, which has superior label noise partitioning capability as shown in Fig. 3, achieves the best performance, while CPCawr beats GMMawr in both cases. The results demonstrate that CPC is helpful for training a better model in label noise learning.
Is the GMM cleaner beneficial to the learning of prototypes? In our method, we propose to leverage the GMM cleaner to facilitate the learning of prototypes via the “small-loss prior”. To validate the effectiveness of our method, we first compare the quality of prototypes learnt in CPC with prototypes learnt in another prototype-based label noise learning method, MoPro (Li et al., 2020b). We take WebVision as the benchmark and utilize prototypes to classify test samples by measuring the similarity between samples and prototypes. The results show that, on the first 50 classes of WebVision, our prototype achieves a top-1 accuracy of 78.44%, while MoPro's accuracy is 72.23%, which demonstrates that our method is able to learn better prototypes. To further verify the contribution of the GMM cleaner, we remove the GMM cleaner and learn class prototypes in CPC via the typical prototypical contrastive objective as in MoPro. In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition, which further drags down the overall training framework for DNNs; this proves the benefit of the GMM cleaner to our method. For more details and discussion, please refer to A.3.
6 CONCLUSION
In this paper, we reveal the long-ignored problem of class-agnostic loss distribution modeling that widely exists in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which can better distinguish clean and noise labels. We justify the effectiveness of our method by explaining it theoretically from the EM algorithm perspective and by providing extensive empirical evidence. The experimental results show that our method achieves competitive performance compared to current SOTAs.
A APPENDIX
A.1 EMPIRICAL VICINAL RISK
We introduce the Empirical Vicinal Risk following Cordeiro et al. (2022). In the semi-supervised learning based label noise learning framework, with the labeled set X and unlabeled set U from a cleaner, the DNNs are trained to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right) + \frac{\lambda(\mathcal{U}')}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right), \qquad (11)$$
where lX ′ and lU ′ denote the losses for set X ′ and U ′, which are weighted by λ(U ′). X ′ and U ′ indicate MixMatch (Berthelot et al., 2019) augmented clean and noise set:
$$\mathcal{X}' = \left\{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{X}\right\}, \qquad \mathcal{U}' = \left\{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{U}\right\}, \qquad (12)$$
with
$$f(x'_i, y'_i|x_i, y_i) = \frac{1}{|\mathcal{X} \cup \mathcal{U}|}\sum_{\mathcal{X} \cup \mathcal{U}} \mathbb{E}_{\lambda}\!\left[\delta\!\left(x'_i = \lambda x_i + (1-\lambda)x_j,\ y'_i = \lambda y_i + (1-\lambda)y_j\right)\right], \qquad (13)$$
where $\delta$ is a Dirac mass centered at $(x', y')$, $\lambda \sim \mathrm{Beta}(a, a)$, and $a \in (0, +\infty)$.
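For illustration, below is a minimal PyTorch-style sketch of the EVR objective in Eq. (11), assuming MixMatch-style augmentation has already produced mixed inputs and soft targets; the cross-entropy/MSE split for the two terms follows the common DivideMix implementation and is an assumption here, as are the names model and lambda_u.

```python
# Sketch of the empirical vicinal risk in Eq. (11). MixMatch mixing is assumed to have
# already produced (x, y) pairs for the labeled set X' and unlabeled set U'.
import torch
import torch.nn.functional as F

def evr_loss(model, x_labeled, y_labeled, x_unlabeled, y_unlabeled, lambda_u):
    # labeled (clean) term: cross-entropy against mixed soft targets
    logits_x = model(x_labeled)
    loss_x = -(y_labeled * F.log_softmax(logits_x, dim=1)).sum(dim=1).mean()
    # unlabeled (noise) term: MSE between probabilities and mixed guessed targets
    probs_u = torch.softmax(model(x_unlabeled), dim=1)
    loss_u = F.mse_loss(probs_u, y_unlabeled)
    return loss_x + lambda_u * loss_u
```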
A.2 OTHER TRAINING DETAILS
A.2.1 TRAINING CONFIGURATIONS
In our method, we follow most of the training set-up of DivideMix (Li et al., 2020a). We present the detailed training configurations as follows:
• CIFAR-10 and CIFAR-100. For all the experiments on CIFAR, we train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 450 epochs. We set the initial learning rate as 0.02, and reduce it by a factor of 10 after 225 epochs. The warm-up period for the DNN is 10 epochs. The weight λ(U′) is set to {0, 25, 50, 150} as in DivideMix. (A minimal optimizer sketch for this schedule is given after this list.)
• Clothing1M. We train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 80 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set as 0.002 and reduced by a factor of 10 after 40 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
• WebVision. We train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 100 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set as 0.01 and reduced by a factor of 10 after 50 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
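As referenced in the CIFAR item above, a minimal sketch of the corresponding optimizer and learning-rate schedule might look as follows; the stand-in model is only a placeholder for the ResNet-18 actually used.

```python
# Sketch of the CIFAR optimization schedule listed above: SGD with momentum 0.9,
# weight decay 5e-4, initial lr 0.02, reduced by 10x after 225 of 450 epochs.
import torch
import torch.nn as nn

model = nn.Linear(3 * 32 * 32, 100)   # placeholder for the ResNet-18 used in the paper
optimizer = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[225], gamma=0.1)

for epoch in range(450):
    # ... one epoch of the two-stage training described in the main text ...
    scheduler.step()
```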
A.2.2 HYPER-PARAMETER STUDY
In this paper, we mainly follow the tuning procedure as in DivideMix to determine the newly introduced hyper-parameters. First of all, we initialize the hyper-parameters to e = 5%, τ = 0.5, α = 1.
Then, for the large-scale real-world benchmarks Clothing1M and WebVision, the hyper-parameter tuning is done on the validation set of Clothing1M and transferred to WebVision. For CIFAR, a small validation set with clean data is split from the training data for hyper-parameter tuning. Due to the diversity of experimental set-ups, it would be a tedious task to tune hyper-parameters for each experimental set-up respectively. Therefore, we only tune the hyper-parameters under CIFAR-100 (sym80%) and CIFAR-100 (sym50%), and transfer the hyper-parameters obtained under CIFAR-100 (sym80%) to the noisier set-up, i.e., CIFAR-100 (sym90%), and those obtained under CIFAR-100 (sym50%) to the less challenging set-ups, i.e., noise ratios lower than 50% and all noise ratios on CIFAR-10.
In practice, when a clean validation set is inaccessible, it can be difficult to tune the hyper-parameters. To shed some light on the hyper-parameter set-up in these cases, we try to derive some empirical guidelines by studying how the performance of CPC varies with respect to the newly introduced hyper-parameters on different benchmarks. According to the experimental results, we find that CPC is robust to the choice of hyper-parameters within the ranges listed in Tab. 4. Generally, e = 5%/10%, τ = 0.5, α = 0/1 is a good choice in most cases.
A.3 DISCUSSION ON THE CONTRIBUTION OF GMM CLEANER TO CPC
In typical prototypical contrastive objective, the unsupervised training labels are determined by similarity between samples and prototypes. Compared to it, we empirically find that GMM cleaner provides more accurate training labels for prototypes, especially in the early stage of training. For example, in CIFAR-10(asym-40%), the averaged accuracy of training labels from GMM cleaner is 9.7% higher during the CPC warming up period.
To evaluate the contribution of the GMM cleaner in our framework, we further present ablation study results in Tab. 5. For CPC w/o GMM Cleaner, we remove the GMM cleaner and learn class prototypes in CPC with the prototypical contrastive objective as in MoPro (Li et al., 2020b). In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition, which further drags down the overall training framework for DNNs, as shown in Tab. 5. The situation is especially severe on challenging benchmarks with more diverse data, e.g., WebVision. The results demonstrate the benefits of the GMM cleaner in our method.
To prove the superiority of our method, we also compare the quality of prototypes learnt in our method with prototypes learnt in MoPro (Li et al., 2020b) on the first 50 classes of WebVision. To evaluate the quality of prototypes learnt in CPC, we utilize the prototypes to classify test samples via measuring the similarity between samples and prototypes. We implement the experiment with the official code released by the MoPro team. The results show that our prototype achieves a top1 accuracy of 78.44%, while MoPro’s accuracy is 72.23%. The result demonstrates that our method is able to learn better prototypes.
A.4 SUPPLEMENTARY DISCUSSION ON THE THEORETICAL JUSTIFICATION
A.4.1 IS q(z′i) A PROPER APPROXIMATION TO q(zi) IN PRACTICE?
In Section 4.3, we replace the estimation of CPC q(zi) in Eq. (9) with the estimation of the GMM cleaner q(z′i) and justify that q(z′i) can be considered an approximation to q(zi). To investigate whether the approximation holds in practice, we calculate the K-L divergence as well as the classification consistency between q(z′i) and q(zi). As shown in Figure 4, as training goes on, the KLD between q(z′i) and q(zi) converges and the classification consistency increases.
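A small sketch of this consistency check is given below; q_gmm and q_cpc are assumed to be arrays of per-sample clean probabilities, and the epsilon constant and threshold default are illustrative choices rather than values from the paper.

```python
# Sketch of the consistency check between the GMM posterior q(z') and the CPC posterior q(z).
import numpy as np

def kld_and_agreement(q_gmm, q_cpc, tau=0.5, eps=1e-8):
    p = np.stack([q_gmm, 1 - q_gmm], axis=1) + eps       # per-sample Bernoulli distributions
    q = np.stack([q_cpc, 1 - q_cpc], axis=1) + eps
    kld = (p * np.log(p / q)).sum(axis=1).mean()          # mean KL(q_gmm || q_cpc)
    agreement = ((q_gmm > tau) == (q_cpc > tau)).mean()   # clean/noise classification consistency
    return kld, agreement
```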
A.4.2 TRAINING PROTOTYPES WITH LC IS AN APPROXIMATION TO THE M-STEP IN EM
As illustrated in Section 4.3, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the probability distribution of whether a sample is clean or noisy, denoted as q(z′i), via the GMM cleaner, which is an approximation to q(zi) in Eq. (8). Consequently, we replace q(zi) in Eq. (9) with q(z′i) and find the prototypes C that minimize the bound, which makes the loss function LC in Eq. (5) an approximation to Eq. (9). The detailed analysis of the relationship between Eq. (5) and Eq. (9) is as follows.
Firstly, we replace the estimation of CPC q(zi) in Eq. (9) with the estimation of the GMM cleaner q(z′i), which is a justified approximation to q(zi):
$$\begin{aligned}
C^{new} &= \arg\min_{C} -\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(y_i|C, x_i) \\
&\approx \arg\min_{C} -\sum_{D}\sum_{z'_i \in \{0,1\}} q(z'_i)\log p(y_i|C, x_i) \\
&= \arg\min_{C} -\sum_{D}\left[q(z'_i = 0)\log p(y_i|C, x_i) + q(z'_i = 1)\log p(y_i|C, x_i)\right]
\end{aligned} \qquad (14)$$
In Eq. (5), q(z′i) is quantized to 1 and 0 by the threshold τ, which makes it a “hard” version of Eq. (14). Specifically, the first term in Eq. (14) updates the prototypes C to better align the samples that are classified as clean with their labeled class prototypes. It is equivalent to the effect of Eq. (5) on positive samples, where:
$$\ell = \log\!\left(\mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad k = y_i,\ z'_i = 0 \qquad (15)$$
where v′i is the embedding of sample xi. The second term in Eq. (14) updates C to prevent the samples that are classified as noise from aligning with their labeled class prototypes, so as to better recognize these samples as noise (i.e., z′i = 1), which is equivalent to the effect of Eq. (5) in reducing the probability of negative samples being recognized as clean:
$$\ell = \log\!\left(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad k = y_i,\ z'_i = 1 \qquad (16)$$
A.5 ILLUSTRATION TO THE OVERALL FRAMEWORK
In this paper, we plug CPC to the popular DivideMix framework. We delineate the overall training framework in Alg.1.
Algorithm 1 CPC based DivideMix
1: Input: Dataset D = (X, Y), DNNs θ(1), θ(2), CPC with class prototypes C(1), C(2), clean probability threshold τ, CPC warm-up period e.
2: θ(1), θ(2) = WarmUp(X, Y, θ(1)), WarmUp(X, Y, θ(2))  // standard training to warm up DNNs
3: while epoch < MaxEpoch do
4:   // get GMM cleaners by loss distribution modeling and calculate the clean/noise probability distribution
5:   Q(2)(Z′) = GMM(X, Y, θ(1))
6:   Q(1)(Z′) = GMM(X, Y, θ(2))
7:   // calculate the clean/noise probability distribution via CPC
8:   Q(2)(Z) = CPC(X, Y, θ(1), C(1))
9:   Q(1)(Z) = CPC(X, Y, θ(2), C(2))
10:  for r ∈ {1, 2} do
11:    // stage 1 begin
12:    X_GMM(r) = {(xi, yi, wi) | wi = q(r)(z′i = 0), q(r)(z′i = 0) > τ, (xi, yi) ∈ D, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
13:    U_GMM(r) = {xi | q(r)(z′i = 0) ≤ τ, xi ∈ X, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
14:    Get noise labels {yi | (xi, yi) ∈ D, xi ∈ U_GMM(r)}
15:    Update C(r) based on Eq. 5
16:    // stage 1 end
17:    // stage 2 begin
18:    if epoch < e then
19:      X(r) = X_GMM(r), U(r) = U_GMM(r)  // use the data partition from the GMM cleaner to update DNNs during the CPC warm-up period
20:    else
21:      X(r) = {(xi, yi, wi) | wi = q(r)(zi = 0), q(r)(zi = 0) > τ, (xi, yi) ∈ D, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
22:      U(r) = {xi | q(r)(zi = 0) ≤ τ, xi ∈ X, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
23:    end if
24:    Update θ(r) based on Eq. 11 as in standard DivideMix
25:    // stage 2 end
26:  end for
27:  epoch ← epoch + 1
28: end while
Output: DNNs θ(1), θ(2) | 1. What is the focus and contribution of the paper regarding semi-supervised noisy label learning?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and empirical performance?
3. What are the weaknesses of the paper, especially regarding the introduction of new hyperparameters and a new loss term?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Semi-supervised noisy label learning works by learning a Gaussian mixture model on the loss distribution, removing the labels of the "noisy" examples and performing semi-supervised learning. This paper proposes a new method to improve how noisy samples are detected. In short, the proposed method learns a separate embedding space using intra-class regularization, and performs clustering on this embedding space to detect noisy samples. The proposed method is evaluated on standard benchmarks like WebVision and Clothing1M and outperforms previous state-of-the-art methods. Overall, I think the proposed method is a novel improvement for the semi-supervised noisy label learning framework and I recommend acceptance.
Strengths And Weaknesses
Strength
The proposed method is novel
The proposed method is empirically strong
Weakness
the proposed method introduces a few more hyperparameters and a new loss term. In reality when we don't have a clean validation set, these hyperparameters can be difficult to set.
Clarity, Quality, Novelty And Reproducibility
The writing is clear and I believe the submission to be reproducible. |
ICLR | Title
Class Prototype-based Cleaner for Label Noise Learning
Abstract
Semi-supervised learning based methods are current SOTA solutions to the noisylabel learning problem, which rely on learning an unsupervised label cleaner first to divide the training samples into a labeled set for clean data and an unlabeled set for noise data. Typically, the cleaner is obtained via fitting a mixture model to the distribution of per-sample training losses. However, the modeling procedure is class agnostic and assumes the loss distributions of clean and noise samples are the same across different classes. Unfortunately, in practice, such an assumption does not always hold due to the varying learning difficulty of different classes, thus leading to sub-optimal label noise partition criteria. In this work, we reveal this long-ignored problem and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). Unlike previous works treating all the classes equally, CPC fully considers loss distribution heterogeneity and applies class-aware modulation to partition the clean and noise data. CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously and thus can better distinguish clean and noise labels. We theoretically justify the effectiveness of our method by explaining it from the Expectation-Maximization (EM) framework. Extensive experiments are conducted on the noisy-label benchmarks CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results show that CPC consistently brings about performance improvement across all benchmarks.
1 INTRODUCTION
Deep Neural Networks (DNNs) have brought about significant progress to the computer vision community over the past few years. One key to their success is the availability of large amounts of training data with proper annotations. However, label noise is very common in real-world applications. Without proper intervention, DNNs would be easily misled by the label noise and yield poor performance.
In order to improve the performance of DNNs when learning with noise labels, various methods have been developed (Liu et al., 2020; Li et al., 2020a; Reed et al., 2014; Nishi et al., 2021). Among them, semi-supervised learning based methods (Nishi et al., 2021; Li et al., 2020a) achieve the most competitive results. The semi-supervised learning methods follow a two-stage pipeline. They first model the loss distribution of training samples to construct a noise cleaner based on the “small-loss prior” (Han et al., 2020), which says in the early stage of training, samples with smaller crossentropy losses are more likely to have clean labels. The prior is widely adopted and demonstrated to be highly effective in practice (Han et al., 2020). Given the noise cleaner, the training samples are divided into a labeled clean set and an unlabeled noise set. Then, semi-supervised learning strategies like MixMatch (Berthelot et al., 2019) are employed to train DNNs on the divided dataset.
The key to their performance lies in the accuracy of the label-noise cleaner (Cordeiro et al., 2022). Usually, a single Gaussian Mixture Model (GMM) (Li et al., 2020a) is used to model the loss distribution of all the training samples across different categories. However, this modeling procedure is class-agnostic, which assumes a DNN model has the same learning speed to fit the training samples in different categories, thus the same loss value on samples in different categories can reflect the same degree of noise likelihood.
Unfortunately, such an assumption does not hold in practice. In Fig. 1, we present the cross-entropy loss distribution of training samples at the end of the DNNs' warm-up period. We conduct a Kolmogorov-
Smirnov test (Massey Jr, 1951) to quantify the loss distribution difference between the samples in each class and the samples in the whole dataset. The results show that for 54% of categories in CIFAR-100 under 90% symmetric noise, the p-value is lower than 0.05 [1] for the hypothesis test that the probability distribution of clean samples in the class is the same as the probability distribution of clean samples in the whole dataset, while the number in the case of noise samples is 53%. Therefore, the class-agnostic label noise cleaner, which establishes an overly rigid criterion shared by all classes, would introduce more noise samples into the clean set while rejecting clean samples, and consequently make the model perform poorly. A straightforward remedy to the problem is to fit distinct GMMs to the losses of samples in different classes respectively, yielding a class-aware GMM cleaner. Nevertheless, this class-aware modeling strategy implicitly assumes that label noise exists in every class. In the case of asymmetric noise, e.g., CIFAR10-asym40%, where samples in parts of the classes are clean, such a naive strategy would classify most hard samples in the clean classes as noise, resulting in a negative effect on model training.
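As a rough illustration of the per-class test described above, the following sketch compares each class's clean-sample loss distribution against the global one with scipy; the array names (losses, labels, is_noisy) and the significance level are assumptions made for illustration.

```python
# Sketch of the per-class Kolmogorov-Smirnov test: compare the loss distribution of clean
# samples of each class against the global distribution of clean-sample losses.
import numpy as np
from scipy.stats import ks_2samp

def fraction_of_divergent_classes(losses, labels, is_noisy, num_classes, alpha=0.05):
    clean_global = losses[~is_noisy]
    divergent = 0
    for k in range(num_classes):
        clean_k = losses[(labels == k) & (~is_noisy)]
        if len(clean_k) == 0:
            continue
        _, p_value = ks_2samp(clean_k, clean_global)   # H0: same distribution
        divergent += p_value < alpha
    return divergent / num_classes
```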
Considering that images in the same category should share similar visual representations, the similarity between a sample and the cluster center (e.g., class prototype) of its labeled class is helpful for recognizing label noise. In this paper, we propose a simple Class Prototype-based label noise Cleaner (CPC) to apply class-aware modulation to the partitioning of clean and noise data, which takes advantage of intra-class consistency regularization in feature space and loss distribution modeling simultaneously. CPC learns an embedding for each class, i.e., class prototypes, via intra-class consistency regularization, which urges samples in the same class to gather around the corresponding class prototype while pushing samples not belonging to the class away. Unlike the aforementioned naive class-aware GMM cleaner, CPC applies class-aware modulation to label noise partitioning via representation similarity measurement, without assuming that label noise exists in every class, which is more general for different label noise scenarios. Meanwhile, CPC leverages the “small-loss prior” to provide stronger and more robust supervision signals to facilitate the learning of prototypes.
We plug CPC to the popular DivideMix(Li et al., 2020a) framework, which iterates between label noise partitioning and DNNs optimization. With the stronger label noise cleaner in the first stage, DNNs can be trained better in the second stage, which would further improve the learning of class prototypes. We theoretically justify the procedure from Expectation-Maximization algorithm perspective, which guarantees the efficacy of the method. We conduct extensive experiments on multiple noisy-label benchmarks, including CIFAR-10, CIFAR-100, Clothing1M and WebVision. The results clearly show that CPC effectively improves accuracy of label-noise partition, and brings about consistently performance improvement across all noise levels and benchmarks.
The contributions of our work are threefold: (1) We reveal the long-ignored problem of class-agnostic loss distribution modeling that widely exists in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC); (2) CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space
[1] A p-value < 0.05 suggests the probability that the class-wise loss distribution is the same as the global loss distribution is lower than 5%.
simultaneously, which can better distinguish clean and noise labels; (3) Extensive experimental results show that our method achieves competitive performance compared to current SOTAs.
2 RELATED WORK
Recent advances in robust learning with noisy labels can be roughly divided into three groups. (a) Label correction methods aim to translate wrong labels into correct ones. Early studies rely on an auxiliary set with clean samples for clean label inference (Xiao et al., 2015a; Vahdat, 2017; Li et al., 2017b; Lee et al., 2018). Recent efforts focus on performing label correction procedures without supervision regarding clean or noise labels. (Yi & Wu, 2019a; Tanaka et al., 2018) propose to jointly optimize labels while learning model parameters. Li et al. (2020b) propose to correct corrupted labels via learning class prototypes and utilize the pseudo-labels generated by measuring the similarity between prototypes and samples to train the model. Wu et al. (2021) and Li et al. (2021) introduce neighbouring information in feature space to correct noise labels, and propose a graph-based method and a class prototype-based method, respectively. (b) Sample selection methods select potentially clean samples for training to eliminate the effect of noise labels on learning the true data distribution. (Han et al., 2018; Jiang et al., 2018; 2020; Yu et al., 2019) involve training two DNNs simultaneously and focus on the samples that are probably correctly labeled. (c) Semi-supervised learning methods conceal noise labels and treat these samples as unlabeled data (Ding et al., 2018). DivideMix (Li et al., 2020a) is a typical algorithm among these works, which comprises an unsupervised label noise cleaner that divides the training data into a labeled clean set and an unlabeled noise set, followed by semi-supervised learning that minimizes the empirical vicinal risk of the model. Inspired by DivideMix, a series of methods (Cordeiro et al., 2022; Nishi et al., 2021; Cordeiro et al., 2021) have been proposed, which achieve SOTA performance. However, all these methods rely on class-agnostic loss distribution modeling to obtain the label noise cleaner, which hinders the performance of the model. The class-agnostic loss distribution modeling implicitly assumes a DNN model has the same learning speed to memorize training samples in different categories. However, in reality, the memorization speeds are different, which causes the problem of under-learning in hard classes, as revealed by Wang et al. (2019). In this paper, we focus on another problem, i.e., the class-agnostic loss distribution modeling problem caused by this issue in the context of the label noise cleaner. In our method, we propose the simple yet effective class prototype-based label noise cleaner to solve the problem. Besides, compared to previous prototype-based label noise learning methods (Li et al., 2020b; 2021), our method differs from them in two aspects: (1) we utilize prototypes as a label noise cleaner to effectively improve semi-supervised learning based methods; (2) CPC takes advantage of both loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which learns better prototypes.
3 PRELIMINARY
In label noise learning, given a training set D = (X,Y ) = {(xi, yi)}Ni=1, where xi is an image and yi ∈ {1, 2, ...,K} is the annotated label over K classes, the label yi could differ from the unknown true label ŷi. In this paper, we follow the popular label noise learning framework DivideMix (Li
et al., 2020a), which first warms up the model for a few epochs by training on all the data using the standard cross-entropy loss, and then trains the model by iterating a two-stage pipeline. The pipeline comprises an unsupervised label cleaner Q to divide training samples into a labeled set for clean data X and an unlabeled set for noise data U , followed by a semi-supervised learning stage that trains the model to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right) + \frac{\lambda}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right), \qquad (1)$$
where X ′ and U ′ indicate MixMatch (Berthelot et al., 2019) augmented clean and noise set. lX ′ and lU ′ denote the losses for samples in set X ′ and U ′, which are weighted by λ. p(ỹ′i|x′i) is the softmax output of DNNs, where ỹ′i is the predicted label. For more details about EVR, please refer to the appendix A.1.
In Li et al. (2020a), the unsupervised label cleaner is operated under the “small-loss prior”, which is widely adopted and demonstrated to be highly effective (Han et al., 2020). The prior assumes that in the early stage of training, samples with smaller cross-entropy losses are more likely to have clean labels. The well known insight behind the “small-loss prior” is that DNNs tend to learn simple patterns first before fitting label noise (Arpit et al., 2017). Given a training sample xi and the softmax output p(ỹi|xi) of DNNs, where ỹi is the predicted label, the cross-entropy loss l(p(ỹi|xi), yi) reflects how well the model fits the training sample.
To achieve the unsupervised label cleaner Q, a two-component Gaussian Mixture Model (GMM) is employed to fit the loss distribution of all training samples, i.e., $\ell(p(\tilde{y}_i|x_i), y_i) \sim \phi_0\mathcal{N}(\mu_0, \sigma_0) + \phi_1\mathcal{N}(\mu_1, \sigma_1)$, where $\mu_0 < \mu_1$ and $\phi$ is a mixing coefficient. The component with the smaller mean represents the distribution of clean samples and the other one is for noise samples. We use $z_i \in \{0, 1\}$ to indicate whether the data is clean or not. Then, $q(z_i = 0)$ represents the clean probability of $x_i$, which is the posterior probability of its loss belonging to the clean component. The label cleaner is shared by training samples across different classes, which is actually class-agnostic. A hypothesis implicitly accompanying this loss distribution modeling method is ignored by current works, which assumes the loss distributions of clean and noise samples are consistent across different categories. Unfortunately, as illustrated in Fig. 1, the hypothesis does not hold in practice. In this paper, we propose the class prototype-based label noise cleaner, which applies class-aware modulation to the partitioning of clean and noise data and improves label noise learning.
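As a concrete illustration of the class-agnostic GMM cleaner described above, the sketch below computes per-sample cross-entropy losses with a warmed-up network and converts them into clean probabilities; names such as model and loader, as well as the min-max normalization of the losses, are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of the class-agnostic GMM cleaner: fit a 2-component GMM to per-sample losses and
# take the posterior of the low-mean component as the clean probability q(z_i = 0).
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture

@torch.no_grad()
def per_sample_losses(model, loader, device="cuda"):
    model.eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses).numpy().reshape(-1, 1)

def gmm_clean_probability(losses):
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)  # normalize to [0, 1]
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4).fit(losses)
    clean_component = np.argmin(gmm.means_.ravel())
    return gmm.predict_proba(losses)[:, clean_component]
```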
4 METHODOLOGY
4.1 OVERVIEW
Our method follows the two-stage label noise learning framework DivideMix (Li et al., 2020a) and improves the framework with the proposed CPC. CPC comprises class prototypes C = {ck ∈ R1×d|k = 1, 2, ...,K}, where ck indicates the prototype of k-th class and d is the dimension of prototype embedding. Our DNN model consists of a CNN backbone, a classifier head and a projection layer. The backbone maps an image input xi to a feature vector vi ∈ R1×D. The classifier takes vi as input and outputs class prediction p(ỹi|xi). The projection layer serves to project the high dimension feature vi to a low-dimensional embedding v′i ∈ R1×d, where d < D. As shown in Fig. 2, we update the DNN as well as the CPC by iterating a two-stage training pipeline in every epoch. In the first stage, we update CPC as well as the projector in DNN, and utilize the updated CPC to partition label noise. We first calculate the cross-entropy loss of every training sample and fits a GMM to the losses. We utilize the GMM as a label noise cleaner to get a labeled clean set XGMM and a unlabeled noise set UGMM . The data partition XGMM and UGMM are utilized to update the prototypes in CPC and parameters in the projector. Note that we cut off the gradient back-propagation from the projector to the CNN backbone. Then, the updated CPC is employed to re-divide the training data into another two set X and U . In the second stage, we train DNN model to minimise the EVR in Eq. (1) with data partitioned by the cleaner. In the first e epochs, we wait CPC to warm up, and minimise the EVR of DNNs based on training data partitioned by the GMM cleaner. After the e-th epoch, the label noise estimation results of CPC, i.e., X and U are employed to train DNNs, while the estimation results of GMM cleaner are only used to update
prototypes in CPC. At inference, we utilize the DNN classifier for image recognition directly. In A.5, we further delineate the full framework.
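A minimal sketch of the network layout just described is given below; the dimensions and the use of a linear projector are illustrative assumptions, while the detach on the projector input mirrors the gradient cut-off mentioned above.

```python
# Sketch of the model: CNN backbone, classification head, and a low-dimensional projection
# head for CPC. The projector sees a detached feature so its gradient does not reach the backbone.
import torch
import torch.nn as nn

class CPCNet(nn.Module):
    def __init__(self, backbone, feat_dim=512, num_classes=100, proto_dim=128):
        super().__init__()
        self.backbone = backbone                       # e.g., a ResNet trunk returning (B, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.projector = nn.Linear(feat_dim, proto_dim)
        self.prototypes = nn.Parameter(torch.randn(num_classes, proto_dim))  # class prototypes C

    def forward(self, x):
        v = self.backbone(x)                           # high-dimensional feature v_i
        logits = self.classifier(v)                    # class prediction p(y~|x) before softmax
        v_proj = self.projector(v.detach())            # low-dimensional embedding v'_i (no grad to backbone)
        return logits, v_proj
```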
4.2 CLASS PROTOTYPE-BASED LABEL NOISE CLEANER
In order to apply class-aware modulation to the label noise partitioning, we propose to learn an embedding space where samples from the same class are aligned with their class prototypes, and leverage the prototypes to recognize noise labels. The prototypes are typically learnt with intra-class consistency regularization, which urges samples in the same class to align with the corresponding class prototype while keeping samples not belonging to the class away. Previous methods (Wang et al., 2022; Li et al., 2020b) apply the intra-class consistency regularization to prototype learning via unsupervised contrastive objectives, e.g., the prototypical contrastive objective (Li et al., 2020c), where the unsupervised training labels are typically determined by the similarity between samples and prototypes. The accuracy of these training labels highly depends on the quality of the representation learnt by the CNN encoder, and can be too low to effectively update the prototypes, especially in the early stage of training. In contrast, we empirically find that the GMM cleaner, which is operated under the well-evaluated “small-loss prior”, is not as sensitive as the prototypes to the representation quality, and can provide more robust and accurate training labels.
Therefore, we propose to take samples in the clean set $\mathcal{X}_{GMM}$ as positive samples and those in the noise set $\mathcal{U}_{GMM}$ as negative samples to update the prototypes. Specifically, given the feature embedding $v'_i$ of a sample $x_i$ from $\mathcal{X}_{GMM}$, we update the prototypes $C$ as well as the parameters of the projector to maximize the score $q(z_i = 0)$ between $c_{k=y_i}$ and $v'_i$, and minimize the score between $c_{k \neq y_i}$ and $v'_i$, via minimizing $\mathcal{L}_{\mathcal{X}_{GMM}}$:
$$\mathcal{L}_{\mathcal{X}_{GMM}} = -\frac{1}{|\mathcal{X}_{GMM}|}\sum_{\mathcal{X}_{GMM}}\sum_{k=1}^{K}\ell_k(v'_i, y_i), \quad \text{where}\quad
\ell_k(v'_i, y_i) =
\begin{cases}
\log\!\left(\mathrm{sigmoid}(v'_i c_k^{\top})\right), & k = y_i,\\
\lambda_{neg}\log\!\left(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\right), & k \neq y_i,
\end{cases} \qquad (2)$$
where $\lambda_{neg} = \frac{1}{K}$ weights the losses between the positive pair and negative pairs to avoid under-fitting the positive samples. Given $v'_i$ of a sample $x_i$ from $\mathcal{U}_{GMM}$, we update prototypes $c_k$ as well as the parameters of the projector to minimize the score $q(z_i = 0)$ between $c_{k=y_i}$ and $v'_i$ via minimizing $\mathcal{L}_{\mathcal{U}_{GMM}}$:
$$\mathcal{L}_{\mathcal{U}_{GMM}} = -\frac{1}{|\mathcal{U}_{GMM}|}\sum_{\mathcal{U}_{GMM}}\log\!\left(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad \text{where } k = y_i. \qquad (3)$$
At last, for noise samples in $\mathcal{U}_{GMM}$ with high classification confidence, the samples are more likely to belong to the class predicted by the DNNs, which is potentially valuable for the update of the prototypes. Therefore, we collect such training samples $\mathcal{X}_P$ from $\mathcal{U}_{GMM}$, taking the averaged classification confidence of samples in $\mathcal{X}_{GMM}$ as the threshold. Specifically, given a sample in $\mathcal{U}_{GMM}$ with the label predicted by the DNNs $k = \max_k(p(\tilde{y}_i|x_i))$, the sample is collected into $\mathcal{X}_P$ if $p(\tilde{y}_i|x_i)_k > \mathrm{average}(\{p(\tilde{y}_j|x_j)_k \mid (x_j, y_j) \in \mathcal{X}_{GMM},\ y_j = k\})$. Then, we update the prototypes and projectors to minimize $\mathcal{L}_{\mathcal{X}_P}$:
$$\mathcal{L}_{\mathcal{X}_P} = -\frac{1}{|\mathcal{X}_P|}\sum_{\mathcal{X}_P}\log\!\left(\mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad \text{where } k = \max_k\left(p(\tilde{y}_i|x_i)\right). \qquad (4)$$
The overall empirical risk $\mathcal{L}_C$ for the prototypes and the projector is as follows:
$$\mathcal{L}_C = \mathcal{L}_{\mathcal{X}_{GMM}} + \mathcal{L}_{\mathcal{U}_{GMM}} + \alpha\,\mathcal{L}_{\mathcal{X}_P}, \qquad (5)$$
where $\alpha$ is the weight scalar.
CPC distinguishes a clean sample $(x_i, y_i)$ with the score $q(z_i = 0) = \mathrm{sigmoid}(v'_i c_{k=y_i}^{\top})$ and the threshold $\tau$. Samples with $q(z_i = 0) > \tau$ are classified as clean, and otherwise as noise.
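To make Eqs. (2)-(5) and the clean score concrete, the following is a minimal PyTorch-style sketch of the CPC objective; the split of a batch into GMM-clean, GMM-noise and confident-noise indices is assumed to be given, and all names are illustrative rather than taken from the released code.

```python
# Sketch of the CPC objective (Eqs. 2-5) and of the clean score used for partitioning.
import torch
import torch.nn.functional as F

def cpc_loss(emb, labels, prototypes, clean_idx, noise_idx, conf_idx, conf_labels, alpha=1.0):
    scores = emb @ prototypes.t()                                        # (B, K) logits v'_i c_k^T
    K = prototypes.shape[0]
    # Eq. (2): pull GMM-clean samples to their labeled prototype, push away from the others
    s = scores[clean_idx]
    y = labels[clean_idx]
    pos = F.logsigmoid(s.gather(1, y[:, None])).squeeze(1)               # log sigmoid(v' c_y^T)
    neg = F.logsigmoid(-s)                                               # log(1 - sigmoid(.)) for every k
    neg = (neg.sum(dim=1) - neg.gather(1, y[:, None]).squeeze(1)) / K    # lambda_neg = 1/K over k != y
    loss_clean = -(pos + neg).mean()
    # Eq. (3): push GMM-noise samples away from their (possibly wrong) labeled prototype
    s_n = scores[noise_idx].gather(1, labels[noise_idx][:, None]).squeeze(1)
    loss_noise = -F.logsigmoid(-s_n).mean()
    # Eq. (4): pull confidently predicted noisy samples toward the predicted class prototype
    s_c = scores[conf_idx].gather(1, conf_labels[:, None]).squeeze(1)
    loss_conf = -F.logsigmoid(s_c).mean()
    return loss_clean + loss_noise + alpha * loss_conf                   # Eq. (5)

def cpc_clean_score(emb, labels, prototypes):
    # q(z_i = 0) = sigmoid(v'_i c_{y_i}^T); samples with score > tau are treated as clean
    return torch.sigmoid((emb * prototypes[labels]).sum(dim=1))
```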
4.3 THEORETICAL JUSTIFICATION ON THE EFFICACY OF CPC
We provide a theoretical justification of the efficacy of CPC from the perspective of the Expectation-Maximization algorithm, which guarantees that though CPC does not follow the classical prototypical contrastive objective, it can still learn meaningful prototypes and act as an effective cleaner.
We consider training data with label noise $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^{N}$ as the observable data, and $Z \in \{0,1\}^N$ as the latent variable, where $z_i = 0$ iff $(x_i, y_i)$ is clean (i.e., $y_i = \hat{y}_i$). The prototypes $C$ in the cleaner are taken as the parameters to be updated. Then, the negative log likelihood for $D$ given $C$ is as follows:
$$NLL(D|C) = -\sum_{D}\log\!\!\sum_{z_i \in \{0,1\}} p(x_i, y_i, z_i|C) = -\sum_{D}\log\!\!\sum_{z_i \in \{0,1\}} q(z_i)\,\frac{p(x_i, y_i, z_i|C)}{q(z_i)}, \qquad (6)$$
where $q(z_i) = p(z_i|x_i, y_i, C)$. According to the Bayes theorem and Jensen's inequality, we have
$$\begin{aligned}
NLL(D|C) &= -\sum_{D}\log\!\!\sum_{z_i \in \{0,1\}} q(z_i)\,p(x_i, y_i|C) \\
&\leq -\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(x_i, y_i|C) \\
&= -\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(y_i|C, x_i) + \mathrm{const},
\end{aligned} \qquad (7)$$
where $-\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(y_i|C, x_i)$ is the upper bound of $NLL(D|C)$. Typically, we can adopt the EM algorithm to find the prototypes $C$ that minimize the upper bound by iterating:
E-step: Compute a new estimate of $q(z_i)$ (i.e., clean or noise) according to the prototypes $C^{old}$ from the last iteration:
$$q(z_i) = p(z_i|x_i, y_i, C^{old}). \qquad (8)$$
M-step: Find the prototypes $C$ that minimize the bound:
$$C^{new} = \arg\min_{C} -\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(y_i|C, x_i) \qquad (9)$$
In our method, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the distribution over clean or noise for each sample, denoted as q(z′i), via the GMM cleaner instead of q(zi) in Eq. (8). Consequently, we replace q(zi) in Eq. (9) with q(z′i) and find the prototypes C that minimize the bound. Next, we provide the justification that the EM algorithm still works by proving that q(z′i) can be considered an approximation to q(zi) in our framework.
In our method, $q(z'_i) = p(z'_i|\ell(p(\tilde{y}_i|x_i), y_i))$, where $\tilde{y}_i \sim p(\tilde{y}_i|x_i, \theta)$ is the label predicted by the DNN parameterized by $\theta$. As introduced in Section 4.1, in the first stage of each epoch, the CPC's estimation results $z_i \sim q(z_i)$ are utilized to divide the training samples into a labeled set for clean data $\mathcal{X} = \{(x_i, y_i)|z_i = 0\}$ and an unlabeled set for noise data $\mathcal{U} = \{(x_i, y_i)|z_i = 1\}$. Then the parameters of the DNNs, which we denote as $\theta$, are optimized using Eq. (1) in the second stage. There exists an optimal $\theta^*$ with respect to $z_i$, with which the softmax output $p(\tilde{y}_i|x_i)$ of the DNNs satisfies:
$$\ell(p(\tilde{y}_i|x_i), y_i) = 0 \ \text{ if } z_i = 0, \ \text{ otherwise } 1, \qquad (10)$$
where $\ell(p(\tilde{y}_i|x_i), y_i)$ is the cross-entropy loss between the network prediction and the annotated label. With these loss values, the subsequent GMM cleaner can easily distinguish samples of $\mathcal{X}$ from samples of $\mathcal{U}$. In other words, under the optimal $\theta^*$, the estimation of the GMM cleaner would be consistent with the partition of CPC, i.e., $z'_i = z_i$. In practice, in each epoch, we take the $\theta$ optimized to minimize Eq. (1) as an approximation to the optimal $\theta^*$ with respect to $z_i$, and consequently we can treat $q(z'_i)$ as an approximation to $q(z_i)$. Therefore, we can see that with the “small-loss prior” introduced into the prototype learning, the EM optimization procedure would still work, which guarantees CPC can learn meaningful prototypes and act as an effective cleaner. In appendix A.4, we further present more details and empirical results to demonstrate that the approximation holds in practice.
5 EXPERIMENTS
5.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets. We evaluate our method on the following popular LNL benchmarks. For CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), we experiment with two types of synthetic noise: symmetric
and asymmetric, which are injected into the datasets following the standard setup in (Li et al., 2020a). Clothing1M (Xiao et al., 2015b) and WebVision1.0 (Li et al., 2017a) are two large-scale real-world label noise benchmarks. Clothing1M contains 1 million images in 14 categories acquired from online shopping websites, which is heavily imbalanced and most of the noise is asymmetric (Yi & Wu, 2019b). WebVision1.0 contains 2.4 million images crawled from the web using the concepts in ImageNet-ILSVRC12 (ILSVRC12). Following convention, we compare with SOTAs on the first 50 classes of WebVision, as well as the performance after transferring to ILSVRC12.
Implementation details. We plug the proposed CPC to the DivideMix (Li et al., 2020a) framework. For Clothing1M and CIFAR-10 with asymmetric noise, we employ a single class-agnostic GMM for loss-distribution modeling. For other cases, we find that class-aware GMMs would further improve the performance of CPC. Following DivideMix, we employ ResNet18 (He et al., 2016) for CIFAR10 and CIFAR-100, and utilize ImageNet pre-trained ResNet-50 for Clothing1M. Since previous works chose different backbones, e.g., Inception-resnet v2 (Szegedy et al., 2017) and ResNet-50, we adopt the weaker one, i.e., ResNet-50 according to (Zheltonozhskii et al., 2021), and train it from scratch for fair comparison. The threshold of CPC τ is set 0.5 by default for all the datasets except for the extremely imbalanced Clothing1M where it is set to 0.3. For CIFAR-10 and CIFAR100, we train the models for 450 epochs. For the large-scale dataset Clothing1M and WebVision1.0, we train the model for 80 and 100 epochs, respectively. The warm-up periods of prototypes for all the datasets is set to the first 5% epochs after network warm-up, except in CIFAR-100 with noise ratios larger than 80% when set to 10% of total epochs. For the other settings, we simply follow the standard set-up as in DivideMix. For more implementation details, please refer to the appendix A.2 and codes in supplementary materials.
5.2 COMPARISON WITH STATE-OF-THE-ART METHODS
Real-world noise benchmarks. We evaluate our method on real-world large scale data sets, and compare our method with latest SOTA label noise learning methods, including DivideMix(Li et al., 2020a), LongReMix(Cordeiro et al., 2022), NGC(Wu et al., 2021), GJS(Englesson & Azizpour, 2021), ELR+(Liu et al., 2020), AugDMix(Nishi et al., 2021) and NCR(Huang et al., 2021). For WebVision, we measure the top1 and top5 accuracy on WebVision validation set and ImageNet ILSVRC12 validation set. We take ResNet50-based DivideMix (Zheltonozhskii et al., 2021) as baseline. As shown in Table 1, our CPC improves top1 and top5 accuracy over baseline model on WebVision by 3.33% and 2.81%, respectively. Our method achieves competitive performance on WebVision, and shows stronger transferable capability, outperforming other competitors on the ILSVRC12 validation set significantly. For Clothing1M, we apply the strong augmentation strategy (Nishi et al., 2021) to DivideMix as our baseline, and rerun the method three times. Our method achieves 75.4% accuracy on this challenging benchmark, outperforming all the other SOTAs. We also notice that though NCR achieves SOTA result on WebVision, it shows moderate performance compared to ELR+, DivideMix and AugDMix on Clothing1M containing asymmetric noise with imbalanced data distribution. It reveals that our method could be more robust across different label noise scenarios.
Synthetic noise benchmarks. We evaluate the performance of CPC on CIFAR-10 and CIFAR-100 datasets with symmetric label noise level ranging from 20% to 90% and asymmetric noise of rate 40%. We take AugDMix as the baseline, and compare our method with latest SOTA methods, where DivideMix, LongReMix and Aug-DMix are semi-supervised learning based methods. Following NGC and GJS, we run our method three times with different random seeds and report the mean and standard deviation. For other methods, e.g., ProtoMix (Li et al., 2021), we report the best results reported in their papers. As shown in Table 2, though with a baseline method as strong as AugDMix, our method brings about performance improvement across all noise levels as well as noise types consistently, and establishes new SOTAs on CIFAR-10 and CIFAR-100. Additionally, we notice that, under asymmetric noise set-up, semi-supervised learning based methods consistently outperform other methods that achieve SOTA results on WebVision benchmark,including NGC, GJS and NCR. The results reveal that semi-supervised learning based method could be more robust to asymmetric noise, while our method achieves SOTA performance among them.
5.3 ANALYSIS
Is CPC a better label noise cleaner? We evaluate the performance of label noise cleaner under both symmetric and asymmetric label noise set-ups. For symmetric noise, we use CIFAR-100 with 90% noise as benchmark to reveal the relationship between CPC and the significant performance improvement under this set-up. For asymmetric noise, we employ the most commonly adopted CIFAR-10-asym40% as benchmark. The AUC of clean/noise binary classification results of a cleaner is calculated as the evaluation metric. We take the original class-agnostic GMM cleaner (GMMagn) proposed in DivideMix as baseline, and compare it to our CPC and the aforementioned naive class-aware GMM cleaner (GMMawr). Furthermore, we also implement another version of CPC that trained based on the class-aware GMM cleaner. To distinguish these two CPC, we denote the regular one trained based on conventional class-agnostic GMM cleaner as CPCagn, and the other one as CPCawr. As shown in Figure 3, in both cases, the regular CPCagn outperforms the baseline GMMagn as well as GMMawr, which demonstrates our class prototype-based method is the better label noise cleaner. As for the comparison between GMMagn and GMMawr, we find that in the situation of high symmetric noise, though GMMagn shows better performance in the early stage of training, GMMawr outperforms it in the second half stage of training. In the case of asymmetric noise, GMMawr, which tend to classify hard clean samples in clean categories as noise wrongly, consistently underperforms GMMagn across the whole training period. The results further prove that our class prototype-based method is the better choice for applying class-aware modulation to label noise cleaning, which is more robust across different noise types. Moreover, we find that in the case of asymmetric noise, CPCagn achieves higher AUC compared to GMMagn, which shows our method can partially make up for the shortcomings of GMMagn. In the case of symmetric noise, we find that GMMagn can further improve the performance of CPC, where CPCawr achieves the best performance among the four cleaners.
How do different label noise cleaners affect label noise learning? We plug different cleaners into the DivideMix framework, and keep all the other training settings the same as described in the implementation details. As shown in Table 3, the final performance of the model is consistent with the performance of the cleaner used. On CIFAR-100 with 90% symmetric noise, the performance improvement brought about by CPCagn is 7.68%, while the model with CPCawr outperforms the baseline method by 13.4%. We also report the comparison results on the large-scale WebVision dataset, where the performance of the different models shows the same trend as in CIFAR-100-sym90%. As for the asymmetric noise situation, i.e., CIFAR-10-asym40% and Clothing1M, the model with CPCagn, which has superior label noise partitioning capability as shown in Fig. 3, achieves the best performance, while CPCawr beats GMMawr in both cases. The results demonstrate that CPC helps to train a better model in label noise learning.
Is the GMM cleaner beneficial to the learning of prototypes? In our method, we propose to leverage the GMM cleaner to facilitate the learning of prototypes via the “small-loss prior”. To validate the effectiveness of our method, we first compare the quality of prototypes learnt in CPC with prototypes learnt in another prototype-based label noise learning method, MoPro (Li et al., 2020b). We take WebVision as the benchmark and utilize prototypes to classify test samples by measuring the similarity between samples and prototypes. The results show that, on the first 50 classes of WebVision, our prototype achieves a top-1 accuracy of 78.44%, while MoPro's accuracy is 72.23%, which demonstrates that our method is able to learn better prototypes. To further verify the contribution of the GMM cleaner, we remove the GMM cleaner and learn class prototypes in CPC via the typical prototypical contrastive objective as in MoPro. In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition, which further drags down the overall training framework for DNNs; this proves the benefit of the GMM cleaner to our method. For more details and discussion, please refer to A.3.
6 CONCLUSION
In this paper, we reveal the long-ignored problem of class-agnostic loss distribution modeling that widely exists in label noise learning, and propose a simple yet effective solution, named Class Prototype-based label noise Cleaner (CPC). CPC takes advantage of loss distribution modeling and intra-class consistency regularization in feature space simultaneously, which can better distinguish clean and noise labels. We justify the effectiveness of our method by explaining it theoretically from the EM algorithm perspective and by providing extensive empirical evidence. The experimental results show that our method achieves competitive performance compared to current SOTAs.
A APPENDIX
A.1 EMPIRICAL VICINAL RISK
We introduce the Empirical Vicinal Risk following Cordeiro et al. (2022). In the semi-supervised learning based label noise learning framework, with the labeled set X and unlabeled set U from a cleaner, the DNNs are trained to minimise the empirical vicinal risk (EVR) (Zhang et al., 2017):
$$\ell_{EVR} = \frac{1}{|\mathcal{X}'|}\sum_{\mathcal{X}'} \ell_{\mathcal{X}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right) + \frac{\lambda(\mathcal{U}')}{|\mathcal{U}'|}\sum_{\mathcal{U}'} \ell_{\mathcal{U}'}\!\left(p(\tilde{y}'_i|x'_i), y'_i\right), \qquad (11)$$
where lX ′ and lU ′ denote the losses for set X ′ and U ′, which are weighted by λ(U ′). X ′ and U ′ indicate MixMatch (Berthelot et al., 2019) augmented clean and noise set:
$$\mathcal{X}' = \left\{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{X}\right\}, \qquad \mathcal{U}' = \left\{(x'_i, y'_i) : (x'_i, y'_i) \sim f(x'_i, y'_i|x_i, y_i),\ (x_i, y_i) \in \mathcal{U}\right\}, \qquad (12)$$
with
$$f(x'_i, y'_i|x_i, y_i) = \frac{1}{|\mathcal{X} \cup \mathcal{U}|}\sum_{\mathcal{X} \cup \mathcal{U}} \mathbb{E}_{\lambda}\!\left[\delta\!\left(x'_i = \lambda x_i + (1-\lambda)x_j,\ y'_i = \lambda y_i + (1-\lambda)y_j\right)\right], \qquad (13)$$
where $\delta$ is a Dirac mass centered at $(x', y')$, $\lambda \sim \mathrm{Beta}(a, a)$, and $a \in (0, +\infty)$.
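For illustration, a minimal sketch of drawing one mixed sample from the vicinal distribution in Eq. (13) is shown below; clamping lambda to max(lambda, 1 - lambda) is a common DivideMix-style choice assumed here, not stated in the text.

```python
# Sketch of mixup sampling from the vicinal distribution in Eq. (13): lambda ~ Beta(a, a).
import numpy as np

def mixup_pair(x_i, y_i, x_j, y_j, a=4.0, rng=np.random.default_rng()):
    lam = rng.beta(a, a)
    lam = max(lam, 1 - lam)          # assumed DivideMix-style clamp so the first sample dominates
    x_mix = lam * x_i + (1 - lam) * x_j
    y_mix = lam * y_i + (1 - lam) * y_j
    return x_mix, y_mix
```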
A.2 OTHER TRAINING DETAILS
A.2.1 TRAINING CONFIGURATIONS
In our method, we follow most of the training set-up of DivideMix (Li et al., 2020a). We present the detailed training configurations as follows:
• CIFAR-10 and CIFAR-100. For all the experiments on CIFAR, we train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 450 epochs. We set the initial learning rate as 0.02, and reduce it by a factor of 10 after 225 epochs. The warm-up period for the DNN is 10 epochs. The weight λ(U′) is set to {0, 25, 50, 150} as in DivideMix.
• Clothing1M. We train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 80 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set as 0.002 and reduced by a factor of 10 after 40 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
• WebVision. We train our DNN model as well as class prototypes in CPC via SGD with a momentum of 0.9, a weight decay of 0.001, and a batch size of 32. The model is trained for 100 epochs. The warm-up period for the DNN is 1 epoch. The initial learning rate is set as 0.01 and reduced by a factor of 10 after 50 epochs. For each epoch, we sample 1000 mini-batches from the training data. The weight λ(U′) is set to 0.
A.2.2 HYPER-PARAMETER STUDY
In this paper, we mainly follow the tuning procedure as in DivideMix to determine the newly introduced hyper-parameters. First of all, we initialize the hyper-parameters to e = 5%, τ = 0.5, α = 1.
Then, for the large-scale real-world benchmarks Clothing1M and WebVision, the hyper-parameter tuning is done on the validation set of Clothing1M and transferred to WebVision. For CIFAR, a small validation set with clean data is split from the training data for hyper-parameter tuning. Due to the diversity of experimental set-ups, it would be a tedious task to tune hyper-parameters for each experimental set-up respectively. Therefore, we only tune the hyper-parameters under CIFAR-100 (sym80%) and CIFAR-100 (sym50%), and transfer the hyper-parameters obtained under CIFAR-100 (sym80%) to the noisier set-up, i.e., CIFAR-100 (sym90%), and those obtained under CIFAR-100 (sym50%) to the less challenging set-ups, i.e., noise ratios lower than 50% and all noise ratios on CIFAR-10.
In practice, when a clean validation set is inaccessible, it can be difficult to tune the hyper-parameters. To shed some light on the hyper-parameter set-up in these cases, we try to derive some empirical guidelines by studying how the performance of CPC varies with respect to the newly introduced hyper-parameters on different benchmarks. According to the experimental results, we find that CPC is robust to the choice of hyper-parameters within the ranges listed in Tab. 4. Generally, e = 5%/10%, τ = 0.5, α = 0/1 is a good choice in most cases.
A.3 DISCUSSION ON THE CONTRIBUTION OF GMM CLEANER TO CPC
In typical prototypical contrastive objective, the unsupervised training labels are determined by similarity between samples and prototypes. Compared to it, we empirically find that GMM cleaner provides more accurate training labels for prototypes, especially in the early stage of training. For example, in CIFAR-10(asym-40%), the averaged accuracy of training labels from GMM cleaner is 9.7% higher during the CPC warming up period.
To evaluate the contribution of the GMM cleaner in our framework, we further present ablation study results in Tab. 5. For CPC w/o GMM Cleaner, we remove the GMM cleaner and learn class prototypes in CPC with the prototypical contrastive objective as in MoPro (Li et al., 2020b). In experiments, we find that without the help of the GMM cleaner, the learnt prototypes generate a less accurate data partition, which further drags down the overall training framework for DNNs, as shown in Tab. 5. The situation is especially severe on challenging benchmarks with more diverse data, e.g., WebVision. The results demonstrate the benefits of the GMM cleaner in our method.
To prove the superiority of our method, we also compare the quality of prototypes learnt in our method with prototypes learnt in MoPro (Li et al., 2020b) on the first 50 classes of WebVision. To evaluate the quality of prototypes learnt in CPC, we utilize the prototypes to classify test samples via measuring the similarity between samples and prototypes. We implement the experiment with the official code released by the MoPro team. The results show that our prototype achieves a top1 accuracy of 78.44%, while MoPro’s accuracy is 72.23%. The result demonstrates that our method is able to learn better prototypes.
A.4 SUPPLEMENTARY DISCUSSION ON THE THEORETICAL JUSTIFICATION
A.4.1 IS q(z′i) A PROPER APPROXIMATION TO q(zi) IN PRACTICE?
In Section 4.3, we replace the estimation of CPC q(zi) in Eq. (9) with the estimation of the GMM cleaner q(z′i) and justify that q(z′i) can be considered an approximation to q(zi). To investigate whether the approximation holds in practice, we calculate the K-L divergence as well as the classification consistency between q(z′i) and q(zi). As shown in Figure 4, as training goes on, the KLD between q(z′i) and q(zi) converges and the classification consistency increases.
A.4.2 TRAINING PROTOTYPES WITH LC IS AN APPROXIMATION TO THE M-STEP IN EM
As illustrated in Section 4.3, in order to introduce the “small-loss prior” to provide stronger and more robust supervision signals for the learning of CPC, in the E-step we estimate the probability distribution of whether a sample is clean or noisy, denoted as q(z′i), via the GMM cleaner, which is an approximation to q(zi) in Eq. (8). Consequently, we replace q(zi) in Eq. (9) with q(z′i) and find the prototypes C that minimize the bound, which makes the loss function LC in Eq. (5) an approximation to Eq. (9). The detailed analysis of the relationship between Eq. (5) and Eq. (9) is as follows.
Firstly, we replace the estimation of CPC q(zi) in Eq. (9) with the estimation of the GMM cleaner q(z′i), which is a justified approximation to q(zi):
$$\begin{aligned}
C^{new} &= \arg\min_{C} -\sum_{D}\sum_{z_i \in \{0,1\}} q(z_i)\log p(y_i|C, x_i) \\
&\approx \arg\min_{C} -\sum_{D}\sum_{z'_i \in \{0,1\}} q(z'_i)\log p(y_i|C, x_i) \\
&= \arg\min_{C} -\sum_{D}\left[q(z'_i = 0)\log p(y_i|C, x_i) + q(z'_i = 1)\log p(y_i|C, x_i)\right]
\end{aligned} \qquad (14)$$
In Eq. (5), q(z′i) is quantized to 1 and 0 by the threshold τ, which makes it a “hard” version of Eq. (14). Specifically, the first term in Eq. (14) updates the prototypes C to better align the samples that are classified as clean with their labeled class prototypes. It is equivalent to the effect of Eq. (5) on positive samples, where:
$$\ell = \log\!\left(\mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad k = y_i,\ z'_i = 0 \qquad (15)$$
where v′i is the embedding of sample xi. The second term in Eq. (14) updates C to prevent the samples that are classified as noise from aligning with their labeled class prototypes, so as to better recognize these samples as noise (i.e., z′i = 1), which is equivalent to the effect of Eq. (5) in reducing the probability of negative samples being recognized as clean:
$$\ell = \log\!\left(1 - \mathrm{sigmoid}(v'_i c_k^{\top})\right), \quad k = y_i,\ z'_i = 1 \qquad (16)$$
A.5 ILLUSTRATION TO THE OVERALL FRAMEWORK
In this paper, we plug CPC to the popular DivideMix framework. We delineate the overall training framework in Alg.1.
Algorithm 1 CPC based DivideMix
1: Input: Dataset D = (X, Y), DNNs θ(1), θ(2), CPC with class prototypes C(1), C(2), clean probability threshold τ, CPC warm-up period e.
2: θ(1), θ(2) = WarmUp(X, Y, θ(1)), WarmUp(X, Y, θ(2))  // standard training to warm up DNNs
3: while epoch < MaxEpoch do
4:   // get GMM cleaners by loss distribution modeling and calculate the clean/noise probability distribution
5:   Q(2)(Z′) = GMM(X, Y, θ(1))
6:   Q(1)(Z′) = GMM(X, Y, θ(2))
7:   // calculate the clean/noise probability distribution via CPC
8:   Q(2)(Z) = CPC(X, Y, θ(1), C(1))
9:   Q(1)(Z) = CPC(X, Y, θ(2), C(2))
10:  for r ∈ {1, 2} do
11:    // stage 1 begin
12:    X_GMM(r) = {(xi, yi, wi) | wi = q(r)(z′i = 0), q(r)(z′i = 0) > τ, (xi, yi) ∈ D, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
13:    U_GMM(r) = {xi | q(r)(z′i = 0) ≤ τ, xi ∈ X, q(r)(z′i = 0) ∈ Q(r)(Z′ = 0)}
14:    Get noise labels {yi | (xi, yi) ∈ D, xi ∈ U_GMM(r)}
15:    Update C(r) based on Eq. 5
16:    // stage 1 end
17:    // stage 2 begin
18:    if epoch < e then
19:      X(r) = X_GMM(r), U(r) = U_GMM(r)  // use the data partition from the GMM cleaner to update DNNs during the CPC warm-up period
20:    else
21:      X(r) = {(xi, yi, wi) | wi = q(r)(zi = 0), q(r)(zi = 0) > τ, (xi, yi) ∈ D, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
22:      U(r) = {xi | q(r)(zi = 0) ≤ τ, xi ∈ X, q(r)(zi = 0) ∈ Q(r)(Z = 0)}
23:    end if
24:    Update θ(r) based on Eq. 11 as in standard DivideMix
25:    // stage 2 end
26:  end for
27:  epoch ← epoch + 1
28: end while
Output: DNNs θ(1), θ(2) | 1. What is the main contribution of the paper regarding deep neural networks for image classification?
2. What are the strengths of the proposed method compared to other approaches?
3. What are the weaknesses of the paper, particularly in terms of explanation and ablation studies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a new method for learning Deep Neural Networks for image classification with label noise. It is based on a semi-supervised approach and a previously proposed framework called DivideMix that iterates between a label noise partitioning step where the dataset is divided into clean and noisy samples and a DNN training step that optimises the neural network parameters on the estimated partitioned dataset. The proposed approach replaces the class-agnostic partitioning by a class-aware prototype-based noise cleaner that trains a projection in a contrastive way resulting in embeddings that help to distinguish between clean and noisy samples. Experimental results show that the approach outperforms the state of the art on learning with symmetric and asymmetric label noise on standard image classification benchmarks.
Strengths And Weaknesses
Strenghts:
Original approach taking into account the difference of clean and noisy examples' loss distributions among different classes.
Good experimental results
Weaknesses:
The description of the overall procedure is quite difficult to understand (Fig. 2, Sect. 4.1).
A more detailed ablation study would have been desirable.
Clarity, Quality, Novelty And Reproducibility
The paper is mostly clear and easy to follow. However, the description of the overall procedure could be explained a bit better. Also, the justification and intuition of some design choices and/or hyperparameters are not given, e.g. the warm-up phase and the fact that both a GMM cleaner and CPC are used in the proposed approach. |